I've built a C# application using NamedPipeServerStream/NamedPipeClientStream, and I'm having trouble serializing objects when the client reads too slowly. Here is a minimal example I could put together.
The client (pure consumer):
NamedPipeClientStream pipeClient;
StreamReader reader = null;
BinaryFormatter formatter = new BinaryFormatter();
if (useString)
    reader = new StreamReader(pipeClient);
while (true)
{
    Thread.Sleep(5000);
    if (useString)
    {
        string line = reader.ReadLine();
    }
    else
    {
        Event evt = (Event)formatter.Deserialize(pipeClient);
    }
}
The server (pure producer):
while (true)
{
    i++;
    Thread.Sleep(1000);
    if (useStrings)
    {
        StreamWriter writer = new StreamWriter(m_pipeServer);
        writer.WriteLine("START data payload {0} END", i);
        writer.Flush();
    }
    else
    {
        BinaryFormatter formatter = new BinaryFormatter();
        formatter.Serialize(m_pipeServer, new Event(i));
    }
    m_pipeServer.Flush();
    m_pipeServer.WaitForPipeDrain();
}
And "Event" is a simple class with a single property tracking the payload: i.
The behavior I expect is simply "missing" events when the server produces too much for the client to read. However, in the string case I get a random ordering of events:
START data payload 0 END
START data payload 1 END
START data payload 2 END
START data payload 4 END
START data payload 15 END
START data payload 16 END
START data payload 24 END
START data payload 3 END
START data payload 35 END
START data payload 34 END
START data payload 17 END
And for the binary serializer I get an exception (this is less surprising):
SerializationException: Binary stream '0' does not contain a valid BinaryHeader. Possible causes are invalid stream or object version change between serialization and deserialization.
Lastly, note that if I remove the call to Sleep on the client, everything works fine: all events are received, in order (as expected).
So I'm trying to figure out how to serialize binary events over a named pipe when the client may read too slowly and miss events. In my scenario, missing events is completely fine. However, I'm surprised that the string events arrive out of order but intact, rather than truncated (due to buffer rollover) or simply dropped.
The binary formatter case is actually the one I care about. I'm trying to serialize and pass relatively small events (~300 bytes) across a named pipe to multiple consumer programs, but I'm concerned those clients won't be able to keep up with the volume.
How do I properly produce/consume these events across a named pipe if we exhaust the buffer? My desired behavior is simply dropping events that the client can't keep up with.
I wouldn't trust the transport layer (i.e. the pipe) to drop packets that the client can't keep up with. I would create a circular queue on the client. A dedicated thread would then service the pipe and put messages on the queue. A separate client thread (or multiple threads) would service the queue. Doing it this way, you should be able to keep the pipe clean.
Since it's a circular queue, newer messages will overwrite older ones. A client reading will always get the oldest message that hasn't yet been processed.
Creating a circular queue is pretty easy, and you can even make it implement IProducerConsumerCollection so that you can use it as the backing store for BlockingCollection.
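A minimal drop-oldest queue along those lines might look like the sketch below. The class and member names are illustrative; a production version would implement IProducerConsumerCollection&lt;T&gt; as suggested, so it can back a BlockingCollection&lt;T&gt;.

```csharp
using System;
using System.Collections.Generic;

// A minimal drop-oldest bounded queue (a sketch, not production code).
// When full, Enqueue evicts the oldest item so the newest event always fits.
public class CircularQueue<T>
{
    private readonly Queue<T> _items = new Queue<T>();
    private readonly int _capacity;
    private readonly object _gate = new object();

    public CircularQueue(int capacity) { _capacity = capacity; }

    public void Enqueue(T item)
    {
        lock (_gate)
        {
            if (_items.Count == _capacity)
                _items.Dequeue();          // drop the oldest event
            _items.Enqueue(item);
        }
    }

    public bool TryDequeue(out T item)
    {
        lock (_gate)
        {
            if (_items.Count > 0)
            {
                item = _items.Dequeue();   // always the oldest unprocessed event
                return true;
            }
            item = default(T);
            return false;
        }
    }
}
```

The pipe-reading thread calls Enqueue; the consumer thread(s) call TryDequeue. The pipe itself is always drained promptly, and the drop policy lives entirely on the client.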
Related
I have a TCP request-response model in C# where I am communicating with a server. Once the server has written data to the stream, I am reading that data, but stream.Read is taking 2 seconds to read it. I need to send an explicit acknowledgement to the server within 2 seconds, but am unable to do so because of the time taken to read the data.
Below is my code to read data:
byte[] resp = new byte[100000];
var memoryStream = new MemoryStream();
int bytes;
String timeStamp = GetTimestamp(DateTime.Now);
Console.WriteLine("Before reading data: ");
Console.WriteLine(timeStamp);
do
{
    bytes = stream.Read(resp, 0, resp.Length);
    memoryStream.Write(resp, 0, bytes);
}
while (bytes > 0);
timeStamp = GetTimestamp(DateTime.Now);
Console.WriteLine("After reading data: ");
Console.WriteLine(timeStamp);
GenerateAcknowledgemnt(stream);
timeStamp = GetTimestamp(DateTime.Now);
Console.WriteLine("After sending ack: ");
Console.WriteLine(timeStamp);
Below are the timestamps read, in the format yyyyMMddHHmmssff:
Before reading data:
2022050615490817
After reading data:
2022050615491019
After sending ack:
2022050615491020
The seconds portion of the timestamps (08 before the read, 10 after) shows the two-second gap.
How do I reduce the time that stream.Read is taking? I have tried wrapping the network stream in a BufferedStream as well, but it didn't help.
At the moment, you are performing a read loop that keeps going until Read returns a non-positive number; in TCP, this means you are waiting until the other end hangs up (or at least shuts down its outbound half of the socket) before you get out of that loop. I suspect what is happening is that the other end is giving up on you, closing its connection, and only then do you get out of the loop.
Basically: you can't loop like that; instead, what you need to do is to carefully read until either EOF (bytes <= 0) or until you have at least one complete frame that you can respond to, and in the latter case: respond then. This usually means a loop more like (pseudo-code):
while (TryReadSomeMoreData()) // performs a read into the buffer, positive result
{
    // note: may have more than one frame per successful 'read'
    while (TryParseOneFrame(out frame))
    {
        ProcessFrame(frame); // includes sending responses
        // (and discard anything that you've now processed from the back-buffer)
    }
}
(parsing a frame here means: following whatever rules apply about isolating a single message from the stream - this may mean looking for a sentinel value such as CR/LF/NUL, or may mean checking if you have enough bytes to read a header that includes a length, and then checking that you have however-many bytes the header indicates as the payload)
This is a little awkward if you're using MemoryStream as the backlog, as the discard step is not convenient; the "pipelines" API is more specifically designed for this, but: either way can work.
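A minimal sketch of that frame-isolation step, assuming a 4-byte length prefix before each payload (the buffer type and method name here are illustrative, not part of any framework API):

```csharp
using System;
using System.Collections.Generic;

// Sketch of TryParseOneFrame for a length-prefixed protocol. `buffer` is the
// back-buffer that the read loop appends to; a successful parse removes the
// consumed bytes (the "discard" step the answer mentions).
public static class Framing
{
    public static bool TryParseOneFrame(List<byte> buffer, out byte[] frame)
    {
        frame = null;
        if (buffer.Count < 4)
            return false;                               // header not complete yet
        // ToArray here is for clarity, not efficiency
        int length = BitConverter.ToInt32(buffer.ToArray(), 0);
        if (buffer.Count < 4 + length)
            return false;                               // payload not complete yet
        frame = buffer.GetRange(4, length).ToArray();
        buffer.RemoveRange(0, 4 + length);              // discard processed bytes
        return true;
    }
}
```

With this in place, the acknowledgement can be sent as soon as one complete frame has arrived, instead of waiting for the peer to close the connection.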
Secondly: you may prefer async IO, although sync IO is probably fine for a simple client application with only one connection (but not for servers, which may have many many connections).
I am using the ZMQ NetMQ package in C# to receive messages from a publisher. I am able to receive the messages, but I am stuck in the while loop. I want to break out of the while loop if the publisher has stopped sending data.
Here is my subscriber code:
using (var subscriber = new SubscriberSocket())
{
    subscriber.Connect("tcp://127.0.0.1:4000");
    subscriber.Subscribe("A");
    while (true)
    {
        var msg = subscriber.ReceiveFrameString();
        Console.WriteLine(msg);
    }
}
Q: "How to check whether a ZMQ publisher is alive or not in C#?"
A: There are at least two ways to do so:
a) Modify the code on both the PUB side and the SUB side, so that the publisher sends, independently of the PUB/SUB-channel messages, PUSH/PULL keep-alive messages that prove to the SUB side it is still alive; the SUB-side loop receives these confirmations on a PULL access point. Not receiving such a soft keep-alive message for some time can make the SUB-side loop decide to break. The same principle can be served by a reversed PUSH/PULL channel, where the SUB side, from time to time, asks the PUB side (listening on its PULL side) via an asynchronously sent soft request to inject a keep-alive message into the PUB channel. (Remember that the TOPIC-filter is plain ASCII filtering from the left of the message payload, so the PUSH-delivered message could simply carry the exact text to be looped back via PUB/SUB to the sender, matching the locally known TOPIC-filter maintained by the very same SUB-side entity.)
b) In cases where you cannot modify the PUB-side code, you can still set up a time-based counter after whose expiry, without a single message received, the SUB-side loop autonomously breaks, as requested above. This can use either a loop of precisely timed aSUB.poll( ... ) calls, which allows a few priority-ordered interleaved control loops to operate without uncontrolled mutual blocking, or a straight non-blocking aSUB.recv( zmq.NOBLOCK ) aligned within the loop with some busy-loop-avoiding, CPU-relieving sleep()s.
Q.E.D.
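Option (b) can be sketched in NetMQ with the timeout-based TryReceiveFrameString overload. The endpoint, topic, and 5-second timeout below are assumptions carried over from the question; tune the timeout to how often the publisher is expected to emit.

```csharp
using System;
using NetMQ;
using NetMQ.Sockets;

// Sketch of option (b): replace the blocking ReceiveFrameString with a
// timeout-based receive, and break out of the loop when nothing arrives.
using (var subscriber = new SubscriberSocket())
{
    subscriber.Connect("tcp://127.0.0.1:4000");
    subscriber.Subscribe("A");
    while (true)
    {
        string msg;
        if (!subscriber.TryReceiveFrameString(TimeSpan.FromSeconds(5), out msg))
        {
            // No message within the timeout: assume the publisher is gone.
            break;
        }
        Console.WriteLine(msg);
    }
}
```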
Does anybody know whether the Kafka clients allow sending and reading the content of a message in an asynchronous way?
I am currently using the Confluent.Kafka producers and consumers in C#, which allow making an async call containing the whole message payload. However, it would be interesting to publish the value of a message with content of several MBs asynchronously, and to be able to read it asynchronously as well, instead of receiving the message in one shot.
using (var producer = new ProducerBuilder<string, string>(config).Build())
{
    await producer.ProduceAsync(_topic, new Message<string, string> { Key = _file, Value = <pass async content here> });
}
Is there any way of achieving this?
Thanks
The producer needs to flush the event and send it to the broker, where it is written to disk and (optionally) acknowledged in full before consumers can read it.
If you'd like to stream chunks of files, then you should send them as binary, but you will need to chunk it yourself, and deal with potential ordering problems in the consumer (e.g. two clients are streaming the same filename, your key, at the same time, with interwoven values)
The recommendation for dealing with files (i.e. large binary content) is to not send them through Kafka, but rather upload them to a shared filesystem, then send the URI as a string through an event.
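That recommendation is often called the "claim check" pattern: stage the file on shared storage, then publish only its URI. Below is a sketch of the staging helper; the ClaimCheck name, the sharedRoot parameter, and the file-copy naming scheme are assumptions, not part of the Confluent.Kafka API.

```csharp
using System;
using System.IO;

// Sketch of the "claim check" pattern: copy the large payload to shared
// storage and return the URI that will be sent as the Kafka message value.
public static class ClaimCheck
{
    public static string Stage(string localFile, string sharedRoot)
    {
        string target = Path.Combine(sharedRoot, Path.GetFileName(localFile));
        File.Copy(localFile, target, overwrite: true);
        return new Uri(target).AbsoluteUri;   // becomes the message value
    }
}
```

The producer side then stays tiny: `await producer.ProduceAsync(_topic, new Message<string, string> { Key = _file, Value = uri });`, and the consumer fetches the content from the shared filesystem at its own pace.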
I've run into a problem that googling can't seem to solve. To keep it simple: I have a client written in C# and a server running on Linux written in C. The client calls Send(buffer) in a loop 100 times. The problem is that the server receives only a dozen of them. If I put a big enough sleep in the loop, everything turns out fine. The buffer is small, about 30 bytes. I read about Nagle's algorithm and delayed ACKs, but it doesn't answer my problem.
for (int i = 0; i < 100; i++)
{
    try
    {
        client.Send(oneBuffer, 0, oneBuffer.Length, SocketFlags.None);
    }
    catch (SocketException socE)
    {
        if ((socE.SocketErrorCode == SocketError.WouldBlock)
            || (socE.SocketErrorCode == SocketError.NoBufferSpaceAvailable)
            || (socE.SocketErrorCode == SocketError.IOPending))
        {
            Console.WriteLine("Never happens :(");
        }
    }
    Thread.Sleep(100); //problem solver but why??
}
It looks like the send buffer gets full and rejects data until it empties again, in both blocking and non-blocking mode. Even better, I never get any exception!? I would expect some exception to be raised, but nothing. :( Any ideas? Thanks in advance.
TCP is stream oriented. This means that recv can read any amount of bytes between one and the total number of bytes outstanding (sent but not yet read). "Messages" do not exist. Sent buffers can be split or merged.
There is no way to get message behavior from TCP. There is no way to make recv read at least N bytes. Message semantics are constructed by the application protocol. Often, by using fixed-size messages or a length prefix. You can read at least N bytes by doing a read loop.
Remove that assumption from your code.
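The read loop mentioned above might look like this sketch. The StreamUtil/ReadAll names are illustrative helpers, not framework APIs:

```csharp
using System;
using System.IO;

// Sketch of the "read at least N bytes" loop: keep calling Read until the
// requested count has arrived, or the peer closes the connection (Read
// returns 0). TCP gives you a byte stream, so this loop is what turns it
// back into fixed-size "messages".
public static class StreamUtil
{
    public static int ReadAll(Stream stream, byte[] buffer, int count)
    {
        int total = 0;
        while (total < count)
        {
            int n = stream.Read(buffer, total, count - total);
            if (n <= 0) break;       // EOF: the other side hung up
            total += n;
        }
        return total;                // equals count unless EOF came first
    }
}
```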
I think this issue is due to the Nagle algorithm:
The Nagle algorithm is designed to reduce network traffic by causing the socket to buffer small packets and then combine and send them in one packet under certain circumstances. A TCP packet consists of 40 bytes of header plus the data being sent. When small packets of data are sent with TCP, the overhead resulting from the TCP header can become a significant part of the network traffic. On heavily loaded networks, the congestion resulting from this overhead can result in lost datagrams and retransmissions, as well as excessive propagation time caused by congestion. The Nagle algorithm inhibits the sending of new TCP segments when new outgoing data arrives from the user if any previously transmitted data on the connection remains unacknowledged.
Calling the client.Send function doesn't mean a TCP segment will be sent.
In your case, as the buffers are small, the Nagle algorithm will regroup them into larger segments. Check on the server side whether the dozen buffers received contain the whole data.
When you add Thread.Sleep(100), you will receive 100 packets on the server side because the Nagle algorithm won't wait any longer for further data.
If you really need low latency in your application, you can explicitly disable the Nagle algorithm for your TcpClient: set the NoDelay property to true. Add this line at the beginning of your code:
client.NoDelay = true;
I was naive to think there was a problem with the TCP stack. It was my server code. Somewhere in between the data manipulation I used the strncpy() function on a buffer that stores messages. Every message contained \0 at the end. strncpy copied only the first message (the first string) out of the buffer, regardless of the count that was given (the buffer length). That resulted in me thinking I had lost messages.
When I used a delay between send() calls on the client, messages didn't get buffered, so strncpy() worked on a buffer holding one message and everything went smoothly. That "phenomenon" led me into thinking that the rate of send calls was causing my problems.
Again, thanks for the help; your comments made me wonder. :)
This is my first question posted on this forum, and I'm a beginner in the C# world, so this is kind of exciting for me, but I'm facing some issues with sending a large amount of data through sockets. Here are more details about my problem:
I'm sending a binary image of 5 MB through a TCP socket. At the receiving end I'm saving the result (the data received) and getting only 1.5 MB, so data has been lost (I compared the original and the resulting file, and it showed me the missing parts).
This is the code I use:
private void senduimage_Click(object sender, EventArgs e)
{
    if (!user.clientSocket_NewSocket.Connected)
    {
        Socket clientSocket_NewSocket = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp);
        user.clientSocket_NewSocket = clientSocket_NewSocket;
        System.IAsyncResult _NewSocket = user.clientSocket_NewSocket.BeginConnect(ip_address, NewSocket.Transceiver_TCP_Port, null, null);
        bool successNewSocket = _NewSocket.AsyncWaitHandle.WaitOne(2000, true);
    }
    byte[] outStream = System.Text.Encoding.ASCII.GetBytes(Uimage_Data);
    user.clientSocket_NewSocket.Send(outStream);
}
In forums they say to divide the data into chunks. Is this a solution? If so, how can I do it? I've tried, but it didn't work!
There are lots of different solutions, but chunking is usually a good one. You can either do it blindly, where you keep filling your temporary buffer and putting it into some stateful buffer until you hit some arbitrary token or the buffer is not completely full, or you can adhere to some sort of contract per TCP message (a message being the overall data to receive).
If you were to look at doing some sort of contract, then make the first N bytes of a message a descriptor, which you could make as big or as small as you want, but your temp buffer will ONLY read this size up front from the stream.
A typical header could be something like:
public struct StreamHeader // 5 bytes
{
    public byte MessageType {get;set;} // 1 byte
    public int MessageSize {get;set;} // 4 bytes
}
So you would read that in; then, if it's small enough, allocate the full message size to the temp buffer and read it all in, or if you deem it too big, chunk it into sections and keep reading until the total bytes you have received match the MessageSize portion of your header structure.
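Putting that contract together, the receive side might look like the sketch below. It assumes the sender writes MessageType as one byte followed by MessageSize as a 4-byte little-endian int; the Receiver/ReadFully names are illustrative.

```csharp
using System;
using System.IO;

// Sketch of reading the 5-byte StreamHeader described above, then the
// payload it announces. ReadFully loops so large payloads arrive in chunks.
public static class Receiver
{
    public static byte[] ReadMessage(Stream stream, out byte messageType)
    {
        byte[] header = ReadFully(stream, 5);          // fixed-size descriptor
        messageType = header[0];
        int messageSize = BitConverter.ToInt32(header, 1);
        return ReadFully(stream, messageSize);         // payload, chunked internally
    }

    private static byte[] ReadFully(Stream stream, int count)
    {
        byte[] buffer = new byte[count];
        int total = 0;
        while (total < count)
        {
            int n = stream.Read(buffer, total, count - total);
            if (n <= 0) throw new EndOfStreamException("connection closed mid-message");
            total += n;
        }
        return buffer;
    }
}
```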
Probably you haven't read the documentation on socket usage in C# (http://msdn.microsoft.com/en-us/library/ms145160.aspx).
The internal buffer cannot store all the data you provided to the Send method. A possible solution to your problem is, as you said, to divide your data into chunks.
int totalBytesToSend = outStream.Length;
int bytesSend = 0;
while (bytesSend < totalBytesToSend)
{
    bytesSend += user.clientSocket_NewSocket.Send(outStream, bytesSend, totalBytesToSend - bytesSend, SocketFlags.None);
}
I suspect that one of your problems is that you are not calling EndConnect. From the MSDN documentation:
The asynchronous BeginConnect operation must be completed by calling the EndConnect method.
Also, the wait:-
bool successNewSocket = _NewSocket.AsyncWaitHandle.WaitOne(2000, true);
is probably always false as there is nothing setting the event to the signaled state. Usually, you would specify a callback function to the BeginConnect function and in the callback you'd call EndConnect and set the state of the event to signaled. See the example code on this MSDN page.
UPDATE
I think I see another problem:-
byte[] outStream = System.Text.Encoding.ASCII.GetBytes(Uimage_Data);
I don't know what type Uimage_Data is, but I really don't think you want to convert it to ASCII. A zero in the data may signal an end-of-data byte (or maybe a 26 or some other ASCII code). The point is, the encoding process is likely to be changing the data.
Can you provide the type for the Uimage_Data object?
Most likely the problem is that you are closing the client-side socket before all the data has been transmitted to the server, and it is therefore getting discarded.
By default when you close a socket, all untransmitted data (sitting in the operating system buffers) is discarded. There are a few solutions:
[1] Set SO_LINGER (see http://developerweb.net/viewtopic.php?id=2982)
[2] Get the server to send an acknowledgement to the client, and don't close the client-side socket until you receive it.
[3] Wait until the output buffer is empty on the client side before closing the socket (test using getsockopt SO_SNDBUF - I'm not sure of the syntax in C#).
Also, you really should be testing the return value of Send(). Although in theory it should block until it sends all the data, I would want to actually verify that, and at least print an error message if there is a mismatch.