BinaryFormatter.Deserialize hangs the whole thread - c#

I have two simple applications connected via named pipes. On the client side I have a method that checks for incoming messages every n ms:
private void timer_Elapsed(Object sender, ElapsedEventArgs e)
{
    IFormatter f = new BinaryFormatter();
    try
    {
        object temp = f.Deserialize(pipeClient); // hangs here
        result = (Func<T>)temp;
    }
    catch
    {
    }
}
Initially the pipe is empty, and the f.Deserialize call hangs the whole application. I can't even check whether the pipe is empty. Is there any solution to this problem?
UPD: I tried XmlSerializer; the behavior is the same.

The thing that is hanging on you is the pipeClient.Read() call that both formatters make internally.
This is the expected behavior of a Stream when you call Read:
Return Value
Type: System.Int32
The total number of bytes that are read into buffer. This might be less than the number of bytes requested if that number of bytes is not currently available, or 0 if the end of the stream is reached.
So the stream will block until data shows up, or throw a timeout exception if it is the kind of stream that supports timeouts. It will never just return without reading anything unless you are "at the end of the stream", which for a PipeStream (or similarly a NetworkStream) only happens when the connection is closed.
The way to solve the problem is to not use a timer to check whether a new message has arrived. Instead, start a background thread and have it sit in a loop; it will block itself until a message shows up.
class YourClass
{
    public YourClass(PipeStream pipeClient)
    {
        _pipeClient = pipeClient;
        var task = new Task(MessageHandler, TaskCreationOptions.LongRunning);
        task.Start();
    }

    //SNIP...

    private void MessageHandler()
    {
        while (_pipeClient.IsConnected)
        {
            IFormatter f = new BinaryFormatter();
            try
            {
                object temp = f.Deserialize(_pipeClient);
                result = (Func<T>)temp;
            }
            catch
            {
                //You really should do some kind of logging.
            }
        }
    }
}
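For completeness, here is a minimal sketch of how the pieces might be wired together on the client side. The pipe name, server name, and timeout are placeholder assumptions, not taken from the original post:

using System.IO.Pipes;

// Connect to a hypothetical named pipe server and hand the stream to the class above.
var pipeClient = new NamedPipeClientStream(".", "demo-pipe", PipeDirection.InOut);
pipeClient.Connect(5000); // wait up to 5 seconds for the server to be available
var handler = new YourClass(pipeClient); // the background loop now blocks in Deserialize until a message arrives

Because the loop owns the blocking Deserialize call, the timer (or UI) thread is never held up waiting for data.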

Related

What is the correct way to use USBHIDDRIVER for multiple writes?

I am writing an application that needs to write messages to a USB HID device and read responses. For this purpose, I'm using USBHIDDRIVER.dll (https://www.leitner-fischer.com/2007/08/03/hid-usb-driver-library/).
It works fine when writing many of the message types, i.e. the short ones.
However, there is one type of message where I have to write a .hex file containing about 70,000 lines. The protocol requires that each line be written individually and sent in a packet containing other information (start byte, end byte, checksum).
I'm encountering problems with this, however.
I've tried something like this:
private byte[] _responseBytes;
private ManualResetEvent _readComplete;

public byte[][] WriteMessage(byte[][] message)
{
    var devResponse = new List<byte[]>();
    _readComplete = new ManualResetEvent(false);
    for (int i = 0; i < message.Length; i++)
    {
        var usbHid = new USBInterface("myvid", "mypid");
        usbHid.Connect();
        usbHid.enableUsbBufferEvent(UsbHidReadEvent);
        if (!usbHid.write(message[i]))
        {
            throw new Exception("Write Failed");
        }
        usbHid.startRead();
        if (!_readComplete.WaitOne(10000))
        {
            usbHid.stopRead();
            throw new Exception("Timeout waiting for read");
        }
        usbHid.stopRead();
        _readComplete.Reset();
        devResponse.Add(_responseBytes.ToArray());
        usbHid = null;
    }
    return devResponse.ToArray();
}

private void UsbHidReadEvent(object sender, EventArgs e)
{
    // Store the received bytes first, then signal the waiting writer.
    _responseBytes = (byte[])((ListWithEvent)sender)[0];
    if (_readComplete != null)
    {
        _readComplete.Set();
    }
}
This appears to work: in Wireshark I can see the messages going back and forth. However, as you can see, it creates an instance of the USBInterface class on every iteration. This seems very clunky, and I can see in Task Manager that it starts to eat up a lot of memory - the current run is above 1 GB and it eventually falls over with an OutOfMemory exception. It is also very slow: the current run is not complete after about 15 minutes, although I've seen another application do the same job in less than a minute.
However, if I move the creation and connection of the USBInterface out of the loop as in...
var usbHid = new USBInterface("myvid", "mypid");
usbHid.Connect();
usbHid.enableUsbBufferEvent(UsbHidReadEvent);
for (int i = 0; i < message.Length; i++)
{
    if (!usbHid.write(message[i]))
    {
        throw new Exception("Write Failed");
    }
    usbHid.startRead();
    if (!_readComplete.WaitOne(10000))
    {
        usbHid.stopRead();
        throw new Exception("Timeout waiting for read");
    }
    usbHid.stopRead();
    _readComplete.Reset();
    devResponse.Add(_responseBytes.ToArray());
}
usbHid = null;
... now what happens is it only allows me to do one write! I write the data and read the response, but when the loop comes around to write the second message, the application just hangs in the write() function and never returns. (It doesn't even time out.)
What is the correct way to do this kind of thing?
(BTW I know it's adding a lot of data to that devResponse object but this is not the source of the issue - if I remove it, it still consumes an awful lot of memory)
UPDATE
I've found that if I don't enable reading, I can do multiple writes without having to create a new USBInterface object on each iteration. This is an improvement, but I'd still like to be able to read each response. (I can see in Wireshark that they are still sent.)

TCP server connection causing processor overload

I have a TCP/IP server that is supposed to allow a connection to remain open as messages are sent across it. However, it seems that some clients open a new connection for each message, which causes the CPU usage to max out. I tried to fix this by adding a time-out but still seem to have the problem occasionally. I suspect that my solution was not the best choice, but I'm not sure what would be.
Below is my basic code with logging, error handling and processing removed.
private void StartListening()
{
    try
    {
        _tcpListener = new TcpListener( IPAddress.Any, _settings.Port );
        _tcpListener.Start();
        while (DeviceState == State.Running)
        {
            var incomingConnection = _tcpListener.AcceptTcpClient();
            var processThread = new Thread( ReceiveMessage );
            processThread.Start( incomingConnection );
        }
    }
    catch (Exception e)
    {
        // Unfortunately, a SocketException is expected when stopping AcceptTcpClient
        if (DeviceState == State.Running) { throw; }
    }
    finally { _tcpListener?.Stop(); }
}
I believe the actual issue is that multiple process threads are being created, but are not being closed. Below is the code for ReceiveMessage.
private void ReceiveMessage( object IncomingConnection )
{
    var buffer = new byte[_settings.BufferSize];
    int bytesReceived = 0;
    var messageData = String.Empty;
    bool isConnected = true;
    using (TcpClient connection = (TcpClient)IncomingConnection)
    using (NetworkStream netStream = connection.GetStream())
    {
        netStream.ReadTimeout = 1000;
        try
        {
            while (DeviceState == State.Running && isConnected)
            {
                // An IOException will be thrown and captured if no message comes in each second. This is the
                // only way to send a signal to close the connection when shutting down. The exception is caught,
                // and the connection is checked to confirm that it is still open. If it is, and the Router has
                // not been shut down, the server will continue listening.
                try { bytesReceived = netStream.Read( buffer, 0, buffer.Length ); }
                catch (IOException e)
                {
                    if (e.InnerException is SocketException se && se.SocketErrorCode == SocketError.TimedOut)
                    {
                        bytesReceived = 0;
                        if (GlobalSettings.IsLeaveConnectionOpen)
                            isConnected = GetConnectionState(connection);
                        else
                            isConnected = false;
                    }
                    else
                        throw;
                }
                if (bytesReceived > 0)
                {
                    messageData += Encoding.UTF8.GetString( buffer, 0, bytesReceived );
                    string ack = ProcessMessage( messageData );
                    var writeBuffer = Encoding.UTF8.GetBytes( ack );
                    if (netStream.CanWrite) { netStream.Write( writeBuffer, 0, writeBuffer.Length ); }
                    messageData = String.Empty;
                }
            }
        }
        catch (Exception e) { ... }
        finally { FileLogger.Log( "Closing the message stream.", Verbose.Debug, DeviceName ); }
    }
}
For most clients the code runs correctly, but there are a few that seem to create a new connection for each message. I suspect the issue lies in how I handle the IOException. For the systems that fail, the code does not seem to reach the finally block until 30 seconds after the first message comes in, and each message creates a new ReceiveMessage thread. So the logs show messages coming in, and 30 seconds in they start to show multiple messages about the message stream being closed.
Below is how I check the connection, in case this is important.
public static bool GetConnectionState( TcpClient tcpClient )
{
    var state = IPGlobalProperties.GetIPGlobalProperties()
        .GetActiveTcpConnections()
        .FirstOrDefault( x => x.LocalEndPoint.Equals( tcpClient.Client.LocalEndPoint )
                           && x.RemoteEndPoint.Equals( tcpClient.Client.RemoteEndPoint ) );
    return state != null ? state.State == TcpState.Established : false;
}
You're reinventing the wheel (in a worse way) at quite a few levels:
1. You're doing pseudo-blocking sockets. That, combined with creating a whole new thread for every connection in an OS like Linux which doesn't have real threads, can get expensive fast. Instead you should create a pure blocking socket with no read timeout (-1) and just listen on it (see the sketch after this list). Unlike UDP, TCP will catch the connection being terminated by the client without you needing to poll for it.
2. The reason you seem to be doing the above is that you're reinventing the standard TCP keep-alive mechanism. It's already written and works efficiently, so simply use it. As a bonus, the standard keep-alive mechanism is driven from the client side, not the server side, so even less processing for you.
3. (Edit) You really need to cache the threads you so painstakingly created. The system thread pool won't suffice if you have that many long-term connections with a single socket communication per thread, but you can build your own expandable thread pool. You can also share multiple sockets on one thread using select, but that's going to change your logic quite a bit.
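To illustrate points 1 and 2, here is a minimal sketch of a per-connection handler built around a pure blocking read with no ReadTimeout (the buffer size is a placeholder; ProcessMessage is the question's own method). Read returning 0 is the disconnect signal, so there is no polling and no timeout juggling:

private void ReceiveMessageBlocking(object incomingConnection)
{
    var buffer = new byte[4096]; // placeholder buffer size
    using (var connection = (TcpClient)incomingConnection)
    using (NetworkStream netStream = connection.GetStream())
    {
        // Let the OS probe for dead peers instead of polling connection state yourself.
        connection.Client.SetSocketOption(SocketOptionLevel.Socket, SocketOptionName.KeepAlive, true);

        int bytesReceived;
        // No ReadTimeout: Read blocks until data arrives and returns 0 when the client closes the connection.
        while ((bytesReceived = netStream.Read(buffer, 0, buffer.Length)) > 0)
        {
            string messageData = Encoding.UTF8.GetString(buffer, 0, bytesReceived);
            string ack = ProcessMessage(messageData);
            var writeBuffer = Encoding.UTF8.GetBytes(ack);
            netStream.Write(writeBuffer, 0, writeBuffer.Length);
        }
    } // falling out of the loop means the client disconnected
}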

C# NetworkStream ReadAsync, data split between buffers

I'm currently using the ReadAsync method on NetworkStream, and so far it's been pretty stable. The problem occurs when the data received from the server is larger than the buffer itself.
ReadCallBack assumes that the buffer contains a complete and valid message, as a new buffer is created on every call to BeginAsyncRead. Thus, when the response is too large for one buffer, it gets split between multiple reads and thus multiple buffers.
My ReadCallBack then assumes each buffer is an independent message (instead of adding them together) and ends up failing the deserialization on both.
My question is, what is the correct way to handle messages that are bigger than the size of the buffer, specially if you want to do the reads asynchronously?
A character itself could be split between two buffers, so this gets quite tricky.
Below is my code for ReadAsync and ReadCallBack.
public async Task ReadForever()
{
    while (this.IsConnected)
    {
        await BeginAsyncRead();
    }
}

public async Task BeginAsyncRead()
{
    if (this.Stream.CanRead)
    {
        try
        {
            ReceiverState readState = new ReceiverState(this.Stream);
            var task = this.Stream.ReadAsync(readState.buffer, 0, ReceiverState.bufferSize);
            await task.ContinueWith(bytesRead => ReadCallBack(bytesRead.Result, readState));
        }
        catch (Exception e)
        {
            Console.WriteLine("Error");
        }
    }
    else
    {
        Console.WriteLine("Unreadable stream");
    }
}

private void ReadCallBack(int numBytesRead, ReceiverState state)
{
    if (numBytesRead > 0)
    {
        string payload = Encoding.ASCII.GetString(state.buffer, 0, numBytesRead);
        string[] responses = payload.Split(new string[] { "\n" }, StringSplitOptions.RemoveEmptyEntries);
        foreach (string res in responses)
        {
            var t = Task.Run(() => this.OnRead(res));
        }
    }
    else
    {
        Console.WriteLine("Corrupted number of bytes received");
    }
}
Note: All responses sent by the server contain a newline at the end.
Note: ReadForever is called inside Task.Run. My application can receive messages from the server as notifications, so I must always be reading for incoming messages (I don't know when a notification may arrive).
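A common way to handle this, sketched under the question's own assumptions (ASCII text, every response terminated by "\n"): keep whatever follows the last newline in a carry-over buffer and only dispatch complete lines. The _pending field below is new; ReceiverState and OnRead are the same names used in the code above.

private readonly StringBuilder _pending = new StringBuilder();

private void ReadCallBack(int numBytesRead, ReceiverState state)
{
    if (numBytesRead <= 0)
        return; // connection closed or nothing read

    // Append the new chunk to whatever was left over from previous reads.
    _pending.Append(Encoding.ASCII.GetString(state.buffer, 0, numBytesRead));

    string data = _pending.ToString();
    int lastNewline = data.LastIndexOf('\n');
    if (lastNewline < 0)
        return; // no complete message yet; keep buffering

    // Everything up to the last newline is complete; the tail stays buffered for the next read.
    string[] responses = data.Substring(0, lastNewline)
                             .Split(new[] { "\n" }, StringSplitOptions.RemoveEmptyEntries);
    _pending.Clear();
    _pending.Append(data.Substring(lastNewline + 1));

    foreach (string res in responses)
    {
        var t = Task.Run(() => this.OnRead(res));
    }
}

This only works cleanly because ASCII is a single-byte encoding; with a multi-byte encoding you would buffer raw bytes and split on the byte value of '\n' before decoding.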

networkstream always empty!

Hey, I'm writing a server-client program, but when my client sends something, it never reaches my server!
I'm sending like this:
public void Send(string s)
{
    char[] chars = s.ToCharArray();
    byte[] bytes = chars.CharToByte();
    nstream.Write(bytes, 0, bytes.Length);
    nstream.Flush();
}
and receiving in a background thread like this:
void CheckIncoming(object dd)
{
    RecievedDelegate d = (RecievedDelegate)dd;
    try
    {
        while (true)
        {
            List<byte> bytelist = new List<byte>();
            System.Threading.Thread.Sleep(1000);
            int ssss;
            ssss = nstream.ReadByte();
            if (ssss > 1)
            {
                System.Diagnostics.Debugger.Break();
            }
            if (bytelist.Count != 0)
            {
                d.Invoke(bytelist.ToArray());
            }
        }
    }
    catch (Exception exp)
    {
        MSGBOX("ERROR:\n" + exp.Message);
    }
}
The ssss int is never > 1. What's happening here?
NetworkStream.Flush() actually has no effect:
The Flush method implements the Stream.Flush method; however, because NetworkStream is not buffered, it has no affect [sic] on network streams. Calling the Flush method does not throw an exception.
How much data is being sent?
If you don't send enough data it may remain buffered at the network level, until you close the stream or write more data.
See the TcpClient.NoDelay property for a way to disable this, if you are only going to be sending small chunks of data and require low latency.
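A minimal sketch of that, assuming you construct the TcpClient yourself (the host and port here are placeholders):

var client = new TcpClient();
client.NoDelay = true; // disable Nagle's algorithm so small writes are sent immediately
client.Connect("serverhost", 12345);
NetworkStream nstream = client.GetStream();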
You should change the check of the return value to if (ssss >= 0).
ReadByte returns a value greater than or equal to 0 if it succeeds in reading a byte, and -1 once the end of the stream is reached (source).
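For example, a receive loop along these lines (just a sketch: it treats -1 as end-of-stream, actually adds each byte to bytelist, and uses DataAvailable to decide when to hand a chunk to the delegate):

while (true)
{
    int ssss = nstream.ReadByte(); // blocks until a byte arrives
    if (ssss < 0)
        break; // -1 means the remote side closed the stream

    bytelist.Add((byte)ssss);

    // Hand off the accumulated bytes once nothing more is immediately available.
    if (!nstream.DataAvailable && bytelist.Count != 0)
    {
        d.Invoke(bytelist.ToArray());
        bytelist.Clear();
    }
}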
To elaborate on Marc's comment: How is nstream created? Maybe there is an underlying class that does not flush.
Well, I'm creating a TcpClient and using GetStream() to get the NetworkStream.

Are there well-known patterns for asynchronous network code in C#?

I recently wrote a quick-and-dirty proof-of-concept proxy server in C# as part of an effort to get a Java web application to communicate with a legacy VB6 application residing on another server. It's ridiculously simple:
The proxy server and clients both use the same message format; in the code I use a ProxyMessage class to represent both requests from clients and responses generated by the server:
public class ProxyMessage
{
    public int Length;   // message length (not including the length bytes themselves)
    public string Body;  // an XML string containing a request/response

    // writes this message instance in the proper network format to stream
    // (helper for response messages)
    public void WriteToStream(Stream stream) { ... }
}
The messages are as simple as could be: the length of the body + the message body.
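For concreteness, here is one possible shape of WriteToStream, which the post elides. It assumes the 4-byte length prefix is produced with BitConverter.GetBytes, matching the BitConverter.ToInt32 call in the read path below, and that the body is ASCII XML:

// Hypothetical implementation of WriteToStream.
public void WriteToStream(Stream stream)
{
    byte[] body = Encoding.ASCII.GetBytes(Body);
    byte[] lengthPrefix = BitConverter.GetBytes(body.Length); // 4-byte Int32 length, not counting itself
    stream.Write(lengthPrefix, 0, lengthPrefix.Length);
    stream.Write(body, 0, body.Length);
}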
I have a separate ProxyClient class that represents a connection to a client. It handles all the interaction between the proxy and a single client.
What I'm wondering is: are there design patterns or best practices for simplifying the boilerplate code associated with asynchronous socket programming? For example, you need to take some care to manage the read buffer so that you don't accidentally lose bytes, and you need to keep track of how far along you are in processing the current message. In my current code, I do all of this work in my callback function for TcpClient.BeginRead, and manage the state of the buffer and the current message-processing state with the help of a few instance variables.
The code for my callback function that I'm passing to BeginRead is below, along with the relevant instance variables for context. The code seems to work fine "as-is", but I'm wondering if it can be refactored a bit to make it clearer (or maybe it already is?).
private enum BufferStates
{
    GetMessageLength,
    GetMessageBody
}

// The read buffer. Initially 4 bytes because we are initially
// waiting to receive the message length (a 32-bit int) from the client
// on first connecting. By constraining the buffer length to exactly 4 bytes,
// we make the buffer management a bit simpler, because
// we don't have to worry about cases where the buffer might contain
// the message length plus a few bytes of the message body.
// Additional bytes will simply be buffered by the OS until we request them.
byte[] _buffer = new byte[4];

// A count of how many bytes have been read so far in a particular BufferState.
int _totalBytesRead = 0;

// The state of our buffer processing. Initially, we want
// to read in the message length, as it's the first thing
// a client will send.
BufferStates _bufferState = BufferStates.GetMessageLength;
// ...ADDITIONAL CODE OMITTED FOR BREVITY...
// This is called every time we receive data from
// the client.
private void ReadCallback(IAsyncResult ar)
{
    try
    {
        int bytesRead = _tcpClient.GetStream().EndRead(ar);
        if (bytesRead == 0)
        {
            // No more data/socket was closed.
            this.Dispose();
            return;
        }

        // The state passed to BeginRead is used to hold a ProxyMessage
        // instance that we use to build up the message
        // as it arrives.
        ProxyMessage message = (ProxyMessage)ar.AsyncState;
        if (message == null)
            message = new ProxyMessage();

        switch (_bufferState)
        {
            case BufferStates.GetMessageLength:
                _totalBytesRead += bytesRead;
                // If we have the message length (a 32-bit int),
                // read it in from the buffer, grow the buffer
                // to fit the incoming message, and change
                // state so that the next read will start appending
                // bytes to the message body.
                if (_totalBytesRead == 4)
                {
                    int length = BitConverter.ToInt32(_buffer, 0);
                    message.Length = length;
                    _totalBytesRead = 0;
                    _buffer = new byte[message.Length];
                    _bufferState = BufferStates.GetMessageBody;
                }
                break;
            case BufferStates.GetMessageBody:
                string bodySegment = Encoding.ASCII.GetString(_buffer, _totalBytesRead, bytesRead);
                _totalBytesRead += bytesRead;
                message.Body += bodySegment;
                if (_totalBytesRead >= message.Length)
                {
                    // Got a complete message.
                    // Notify anyone interested.
                    // Pass a response ProxyMessage object along
                    // with the event so that receivers of OnReceiveMessage
                    // can send a response back to the client after processing
                    // the request.
                    ProxyMessage response = new ProxyMessage();
                    OnReceiveMessage(this, new ProxyMessageEventArgs(message, response));
                    // Send the response to the client.
                    response.WriteToStream(_tcpClient.GetStream());
                    // Re-initialize our state so that we're
                    // ready to receive additional requests...
                    message = new ProxyMessage();
                    _totalBytesRead = 0;
                    _buffer = new byte[4]; // message length is a 32-bit int (4 bytes)
                    _bufferState = BufferStates.GetMessageLength;
                }
                break;
        }

        // Wait for more data...
        _tcpClient.GetStream().BeginRead(_buffer, 0, _buffer.Length, this.ReadCallback, message);
    }
    catch
    {
        // do nothing
    }
}
So far, my only real thought is to extract the buffer-related stuff into a separate MessageBuffer class and simply have my read callback append new bytes to it as they arrive. The MessageBuffer would then worry about things like the current BufferState and fire an event when it received a complete message, which the ProxyClient could then propagate further up to the main proxy server code, where the request can be processed.
I've had to overcome similar problems. Here's my solution (modified to fit your own example).
We create a wrapper around Stream (the base class of NetworkStream, which is what TcpClient.GetStream() hands you). It monitors reads. When some data is read, it is buffered. When we receive a length indicator (4 bytes), we check whether we have a full message (4 bytes + message body length). When we do, we raise a MessageReceived event with the message body and remove the message from the buffer. This technique automatically handles fragmented messages and multiple-messages-per-packet situations.
public class MessageStream : IMessageStream, IDisposable
{
    public MessageStream(Stream stream)
    {
        if (stream == null)
            throw new ArgumentNullException("stream", "Stream must not be null");
        if (!stream.CanWrite || !stream.CanRead)
            throw new ArgumentException("Stream must be readable and writable", "stream");

        this.Stream = stream;
        this.readBuffer = new byte[512];
        messageBuffer = new List<byte>();
        stream.BeginRead(readBuffer, 0, readBuffer.Length, new AsyncCallback(ReadCallback), null);
    }

    // These belong to the ReadCallback thread only.
    private byte[] readBuffer;
    private List<byte> messageBuffer;

    private void ReadCallback(IAsyncResult result)
    {
        int bytesRead = Stream.EndRead(result);
        messageBuffer.AddRange(readBuffer.Take(bytesRead));

        // Pull out as many complete messages as the buffer currently holds.
        while (messageBuffer.Count >= 4)
        {
            int length = BitConverter.ToInt32(messageBuffer.Take(4).ToArray(), 0); // 4 bytes per int32
            // Keep buffering until we get a full message.
            if (messageBuffer.Count < length + 4)
                break;

            byte[] body = messageBuffer.Skip(4).Take(length).ToArray();
            messageBuffer.RemoveRange(0, length + 4); // consume the length prefix and the body
            OnMessageReceived(new MessageEventArgs(body));
        }

        // FIXME below is kinda hacky (I don't know the proper way of doing things...)
        // Don't bother reading again. We don't have stream access.
        if (disposed)
            return;
        try
        {
            Stream.BeginRead(readBuffer, 0, readBuffer.Length, new AsyncCallback(ReadCallback), null);
        }
        catch (ObjectDisposedException)
        {
            // DO NOTHING
            // Ends read loop.
        }
    }

    public Stream Stream
    {
        get;
        private set;
    }

    public event EventHandler<MessageEventArgs> MessageReceived;

    protected virtual void OnMessageReceived(MessageEventArgs e)
    {
        var messageReceived = MessageReceived;
        if (messageReceived != null)
            messageReceived(this, e);
    }

    public virtual void SendMessage(Message message)
    {
        // Have fun ...
    }

    // Dispose stuff here
}
I think the design you've used is fine; that's roughly how I would do, and have done, the same sort of thing. I don't think you'd gain much by refactoring into additional classes/structs, and from what I've seen you'd actually make the solution more complex by doing so.
The only comment I'd have is whether the two reads, where the first is always the message length and the second is always the body, are robust enough. I'm always wary of approaches like that: if they somehow get out of sync due to an unforeseen circumstance (such as the other end sending the wrong length), it's very difficult to recover. Instead I'd do a single read with a big buffer so that I always get all the available data from the network, and then inspect the buffer to extract complete messages. That way, if things do go wrong, the current buffer can just be thrown away to get things back to a clean state, and only the current messages are lost rather than the whole service stopping.
Actually, at the moment you would have a problem if your message body was big and arrived in two separate receives, and the next message in line sent its length at the same time as the second half of the previous body. If that happened, the message length would end up appended to the body of the previous message and you'd be in the situation described in the previous paragraph.
You can use yield return to automate the generation of a state machine for asynchronous callbacks. Jeffrey Richter promotes this technique through his AsyncEnumerator class, and I've played around with the idea here.
There's nothing wrong with the way you've done it. For me, though, I like to separate the receiving of the data from the processing of it, which is what you seem to be thinking with your proposed MessageBuffer class. I have discussed that in detail here.
