I have been using BinaryFormatter to serialize an object and send it through a NetworkStream.
The code looks like this:
//OpenConnection ...
TcpClient client = server.AcceptTcpClient();
Message message = new Message("bla bla"); // This is the serializable class
NetworkStream stream = client.GetStream(); // Get Stream
BinaryFormatter bf = new BinaryFormatter();
bf.Serialize(stream, message);
stream.Flush();
stream.Close(); //Close Connection
On the client side, we just need to read from the stream with
bf.Deserialize(stream) as Message
to get the object that was sent from the server.
But there is a problem: if I delete the stream.Close(); line, the client cannot read the object. (Alternatively, I can change it to stream.Dispose();.)
However, I want to use this stream again to send another Message. How can I do that? Please help; this is giving me a real headache.
UPDATE:
I found the cause of this issue: I was running both the client and the server on one machine. It works fine on two different machines. Can someone tell me why? I have been struggling with this for a couple of days.
Sending multiple separate messages involves "framing" - splitting the single channel into separate chunks that don't ever require the client to "read to end". Oddly, though, I was under the impression that BinaryFormatter already implemented basic framing, but I could be wrong. In the general case, when working with a binary protocol, the most common approach is to prefix each message with the length of the payload, i.e.
using(var ms = new MemoryStream()) {
while(...)
{
// not shown: serialize to ms
var len = BitConverter.GetBytes((int)ms.Length);
output.Write(len, 0, 4);
output.Write(ms.GetBuffer(), 0, (int) ms.Length);
ms.SetLength(0); // ready for next cycle
}
}
the caller then has to (a minimal reader sketch follows this list):
read exactly 4 bytes (at least, for the above), or detect EOF
determine the length
read exactly that many bytes
deserialize
repeat
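For illustration, the matching reader side for the 4-byte prefix above might look something like this (just a sketch; ReadExactly is a little helper defined below rather than a framework method, and input stands for the NetworkStream):
// read length-prefixed messages until the sender closes the connection
var formatter = new BinaryFormatter();
var lenBuffer = new byte[4];
while (ReadExactly(input, lenBuffer, 4))          // false means clean EOF before a prefix
{
    int len = BitConverter.ToInt32(lenBuffer, 0);
    var payload = new byte[len];
    if (!ReadExactly(input, payload, len))
        throw new EndOfStreamException();
    using (var ms = new MemoryStream(payload))
    {
        var message = (Message)formatter.Deserialize(ms);
        // ... process message
    }
}

// keep calling Read until 'count' bytes have arrived; returns false on EOF at a message boundary
static bool ReadExactly(Stream source, byte[] buffer, int count)
{
    int offset = 0;
    while (offset < count)
    {
        int read = source.Read(buffer, offset, count - offset);
        if (read <= 0)
        {
            if (offset == 0) return false;        // clean EOF between messages
            throw new EndOfStreamException();     // EOF in the middle of a message
        }
        offset += read;
    }
    return true;
}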
If that sounds like a lot of work, maybe just use a serializer that does all this for you; for example, with protobuf-net, this would be:
while(...) { // each item
    Serializer.SerializeWithLengthPrefix(output, message, PrefixStyle.Base128, 1); // message = the current item
}
and the reader would be:
foreach(var msg in Serializer.DeserializeItems<Message>(
input, PrefixStyle.Base128, 1))
{
// ...
}
(note: this does not use the same format / rules as BinaryFormatter)
Related
I have a TcpClient-based client and server set up on my local machine. I have been using the NetworkStream to communicate back and forth between the two successfully.
Moving forward, I am trying to add compression to the communication. I've tried GZipStream and DeflateStream, and I have decided to focus on DeflateStream. However, the connection now hangs without reading data.
I have tried four different implementations that have all failed because the server side does not read the incoming data and the connection times out. I will focus on the two implementations I have tried most recently and that, to my knowledge, should work.
The client boils down to this request. There are two separate implementations, one with a StreamWriter and one without.
textToSend = ENQUIRY + START_OF_TEXT + textToSend + END_OF_TEXT;
// Send XML Request
byte[] request = Encoding.UTF8.GetBytes(textToSend);
using (DeflateStream streamOut = new DeflateStream(netStream, CompressionMode.Compress, true))
{
//using (StreamWriter sw = new StreamWriter(streamOut))
//{
// sw.Write(textToSend);
// sw.Flush();
streamOut.Write(request, 0, request.Length);
streamOut.Flush();
//}
}
The server receives the request and I do:
1.) a quick read of the first character, then, if it matches what I expect,
2.) I continue reading the rest.
The first read works correctly, and if I want to read the whole stream it is all there. However, I only want to read the first character, evaluate it, and then continue reading in my LongReadStream method.
When I try to continue reading the stream, there is no data to be read. I am guessing that the data is being lost during the first read, but I'm not sure how to verify that. All of this code works correctly when I use a plain NetworkStream.
Here is the server side code.
private void ProcessRequests()
{
// This method reads the first byte of data correctly and if I want to
// I can read the entire request here. However, I want to leave
// all that data until I want it below in my LongReadStream method.
if (QuickReadStream(_netStream, receiveBuffer, 1) != ENQUIRY)
{
// Invalid Request, close connection
clientIsFinished = true;
_client.Client.Disconnect(true);
_client.Close();
return;
}
while (!clientIsFinished) // Keep reading text until client sends END_TRANSMISSION
{
// Inside this method there is no data and the connection times out waiting for data
receiveText = LongReadStream(_netStream, _client);
// Continue talking with Client...
}
_client.Client.Shutdown(SocketShutdown.Both);
_client.Client.Disconnect(true);
_client.Close();
}
private string LongReadStream(NetworkStream stream, TcpClient c)
{
bool foundEOT = false;
StringBuilder sbFullText = new StringBuilder();
int readLength, totalBytesRead = 0;
string currentReadText;
c.ReceiveBufferSize = DEFAULT_BUFFERSIZE * 100;
byte[] bigReadBuffer = new byte[c.ReceiveBufferSize];
while (!foundEOT)
{
using (var decompressStream = new DeflateStream(stream, CompressionMode.Decompress, true))
{
//using (StreamReader sr = new StreamReader(decompressStream))
//{
//currentReadText = sr.ReadToEnd();
//}
readLength = decompressStream.Read(bigReadBuffer, 0, c.ReceiveBufferSize);
currentReadText = Encoding.UTF8.GetString(bigReadBuffer, 0, readLength);
totalBytesRead += readLength;
}
sbFullText.Append(currentReadText);
if (currentReadText.EndsWith(END_OF_TEXT))
{
foundEOT = true;
sbFullText.Length = sbFullText.Length - 1;
}
else
{
sbFullText.Append(currentReadText);
}
// Validate data code removed for simplicity
}
c.ReceiveBufferSize = DEFAULT_BUFFERSIZE;
c.ReceiveTimeout = timeOutMilliseconds;
return sbFullText.ToString();
}
private string QuickReadStream(NetworkStream stream, byte[] receiveBuffer, int receiveBufferSize)
{
using (DeflateStream zippy = new DeflateStream(stream, CompressionMode.Decompress, true))
{
int bytesIn = zippy.Read(receiveBuffer, 0, receiveBufferSize);
var returnValue = Encoding.UTF8.GetString(receiveBuffer, 0, bytesIn);
return returnValue;
}
}
EDIT
NetworkStream has an underlying Socket property, which in turn has an Available property. MSDN says this about the Available property:
Gets the amount of data that has been received from the network and is
available to be read.
Before the call below Available is 77. After reading 1 byte the value is 0.
//receiveBufferSize = 1
int bytesIn = zippy.Read(receiveBuffer, 0, receiveBufferSize);
There doesn't seem to be any documentation about DeflateStream consuming the whole underlying stream, and I don't know why it would do such a thing when there are explicit calls to read specific numbers of bytes.
Does anyone know why this happens, or if there is a way to preserve the underlying data for a future read? Based on this 'feature', and a previous article I read stating that a DeflateStream must be closed to finish sending (Flush won't work), it seems DeflateStream may be of limited use for networking, especially if one wishes to counter DoS attacks by inspecting incoming data before accepting a full stream.
The basic flaw I can see looking at your code is a possible misunderstanding of how network streams and compression work.
I think your code might work if you kept working with one DeflateStream. However, you use one in your quick read and then you create another one.
I will try to explain my reasoning with an example. Assume you have 8 bytes of original data to be sent over the network in compressed form. Now let's assume, for the sake of argument, that each byte (8 bits) of original data is compressed to 6 bits in compressed form. Now let's see what your code does with this.
From the network stream, you can't read less than 1 byte at a time. You can't take just 1 bit. You can take 1 byte, 2 bytes, or any number of bytes, but not bits.
But if you want to receive just 1 byte of the original data, you need to read the first whole byte of compressed data. However, only 6 bits of that compressed byte represent the first byte of uncompressed data; the last 2 bits belong to the second byte of the original data.
Now if you cut the stream there, what is left is 5 bytes in the network stream that do not make any sense on their own and can't be decompressed.
The deflate algorithm is more complex than that, so it makes perfect sense that it does not allow you to stop reading from the NetworkStream at one point and continue with a new DeflateStream from the middle. There is a decompression context that must be present in order to decompress the data back to its original form. Once you dispose of the first DeflateStream in your quick read, this context is gone and you can't continue.
So, to resolve your issue, create only one DeflateStream, pass it to your functions, and dispose of it when you are done.
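For example, something along these lines (a rough sketch reusing the names from the question, with QuickReadStream and LongReadStream assumed to be changed to take a plain Stream and read from it directly instead of wrapping it again):
// create ONE DeflateStream for the lifetime of the connection and reuse it everywhere
using (var decompressStream = new DeflateStream(_netStream, CompressionMode.Decompress, true))
{
    if (QuickReadStream(decompressStream, receiveBuffer, 1) != ENQUIRY)
    {
        // invalid request, close the connection as before
        return;
    }
    while (!clientIsFinished)
    {
        receiveText = LongReadStream(decompressStream, _client);
        // continue talking with the client ...
    }
}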
This is broken in many ways.
You are assuming that a Read call will return exactly the number of bytes you want. It might instead return everything in one-byte chunks.
DeflateStream has an internal buffer. It can't be any other way: input bytes do not correspond 1:1 to output bytes, so there must be some internal buffering. You must use a single such stream.
The same issue applies to UTF-8: UTF-8 encoded strings cannot be split at arbitrary byte boundaries, so sometimes your Unicode data will be garbled.
Don't touch ReceiveBufferSize; it does not help in any way.
You cannot reliably flush a deflate stream, I think, because the output might end at a partial byte position. You probably should devise a message framing format in which you prepend the compressed length as an uncompressed integer, and then send the compressed deflate data after the length. That can be decoded reliably.
Fixing these issues is not easy.
Since you seem to control both the client and the server, you should discard all of this and not devise your own network protocol. Use a higher-level mechanism such as web services, HTTP, or protobuf. Anything is better than what you have there.
Basically, there are a few things wrong with the code I posted above. The first is that when I read data, I'm not doing anything to make sure ALL of the data is being read in. As per the Microsoft documentation:
The Read operation reads as much data as is available, up to the
number of bytes specified by the size parameter.
In my case I was not making sure my reads would get all the data I expected.
This can be accomplished simply with this code.
byte[] data = new byte[packageSize];
bytesRead = _netStream.Read(data, 0, packageSize);
while (bytesRead < packageSize)
bytesRead += _netStream.Read(data, bytesRead, packageSize - bytesRead);
On top of this problem, I had a fundamental issue with how I was using DeflateStream: I should not write with the DeflateStream directly to the underlying NetworkStream. The correct approach is to first use the DeflateStream to compress the data into a byte array, and then send that byte array over the NetworkStream directly.
Using this approach let me correctly compress data over the network and properly read it on the other end.
You may point out that I must know the size of the data, and that is true. Every call has an 8-byte 'header' that includes the size of the compressed data and the size of the data when uncompressed, although I think the second was ultimately not needed.
The code for this is here. Note that the variable packageSize serves two purposes: first as a count of header bytes read, then as the decoded package size.
int packageSize = streamIn.Read(sizeOfDataInBytes, 0, 4);
while (packageSize != 4)
{
    packageSize += streamIn.Read(sizeOfDataInBytes, packageSize, 4 - packageSize);
}
packageSize = BitConverter.ToInt32(sizeOfDataInBytes, 0);
With this information I can correctly use the code I showed you first to get the contents fully.
Once I have the full compressed byte array I can get the incoming data like so:
var output = new MemoryStream();
using (var stream = new MemoryStream(bufferIn))
{
using (var decompress = new DeflateStream(stream, CompressionMode.Decompress))
{
decompress.CopyTo(output);
}
}
output.Position = 0;
var unCompressedArray = output.ToArray();
output.Close();
output.Dispose();
return Encoding.UTF8.GetString(unCompressedArray);
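For completeness, the sending side under this approach looks roughly like this (an illustrative sketch rather than my exact code):
// compress into a byte array first, then frame it and send it
byte[] raw = Encoding.UTF8.GetBytes(textToSend);
byte[] compressed;
using (var ms = new MemoryStream())
{
    using (var deflate = new DeflateStream(ms, CompressionMode.Compress, true))
    {
        deflate.Write(raw, 0, raw.Length);
    }   // disposing the DeflateStream finishes the compressed data
    compressed = ms.ToArray();
}

// 8-byte header: compressed size, then uncompressed size, then the payload itself
_netStream.Write(BitConverter.GetBytes(compressed.Length), 0, 4);
_netStream.Write(BitConverter.GetBytes(raw.Length), 0, 4);
_netStream.Write(compressed, 0, compressed.Length);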
I am trying to serialize a list of 15 objects this way:
XmlSerializer xmlSerializer = new XmlSerializer(typeof(List<Employee>));
MemoryStream memStream = new MemoryStream();
// Serialize
xmlSerializer.Serialize(memStream, allKnownWorkers);
memStream.Position = 0;
data = memStream.GetBuffer();
Console.WriteLine("Transmitting.....");
stream.Write(data, 0, data.Length); // NetworkStream
Deserialization looks like:
// Read the first batch of the TcpServer response bytes.
Int32 bytes = stream.Read(data, 0, data.Length);
XmlSerializer xmlSerializer = new XmlSerializer(typeof(List<Employee>));
MemoryStream memStream = new MemoryStream();
memStream.Write(data, 0, bytes);
memStream.Position = 0;
// Deserialize
workers.AddRange((List<Employee>)xmlSerializer.Deserialize(memStream));
I get an exception on the last line of the deserialization: "Unexpected end of file has occurred. The following elements are not closed: ...". When I send a list with only a few objects it works correctly. I suppose there is a problem with the stream buffer length. How can I fix it?
Thank you very much!
You are treating your stream-based network connection as if it were message based. But it's not. So you can't count on a single read from the stream returning all of the data for a single object, or even a single transmission.
Instead, you need to design into your protocol a way to know when you've read all the data for a unit of processing (whatever that happens to be in the given context...here that seems to be an XML document).
There are lots of ways to accomplish this. I would say the two most straightforward are to either transmit a byte count first, before the XML data, so that the receiver knows how many bytes to read before it tries to parse the XML, or to simply build the XML parsing into the stream reading.
On that latter point, you might try just handing the network stream to your XmlSerializer. I don't recall off the top of my head how well it will handle this, but it could work as long as the XmlSerializer stops reading once it's got a complete XML document, instead of trying to read all the way to the end of the stream. But even if XmlSerializer doesn't just give it to you for free, it should not be too hard to detect the opening tag for the XML document's root element and then just keep reading data until you read the closing tag.
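As a rough illustration of the byte-count approach (a sketch only, reusing the names from the question; EOF and error handling are omitted):
// sender: serialize to memory first, then send a 4-byte length followed by the XML bytes
byte[] xmlBytes;
using (var ms = new MemoryStream())
{
    xmlSerializer.Serialize(ms, allKnownWorkers);
    xmlBytes = ms.ToArray();
}
stream.Write(BitConverter.GetBytes(xmlBytes.Length), 0, 4);
stream.Write(xmlBytes, 0, xmlBytes.Length);

// receiver: read exactly 4 bytes, then exactly that many bytes, then deserialize
byte[] lenBuf = new byte[4];
int got = 0;
while (got < 4)
    got += stream.Read(lenBuf, got, 4 - got);

byte[] body = new byte[BitConverter.ToInt32(lenBuf, 0)];
got = 0;
while (got < body.Length)
    got += stream.Read(body, got, body.Length - got);

workers.AddRange((List<Employee>)xmlSerializer.Deserialize(new MemoryStream(body)));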
I'm trying to send a broker message to a service bus, and I want the message to be a list of multiple types. I've tried using interfaces as well as objects, and it works fine until I add more than one type to the list. I've read several posts and online articles about doing something similar, and they all seem to be specific to doing manual XML serialization or to using WCF. In this case the serialization is happening automatically.
My code is like so:
Queue<Object> x = new Queue<Object>();
x.Enqueue(new VRequest());
x.Enqueue(new PRequest());
ServiceBus.TrackerClient.SendAsync(new BrokeredMessage(x) { ContentType = "BulkRequest" });
Then in my broker message handler (where a serialization error occurs):
var bulk = message.GetBody<Queue<Object>>();
Any ideas on how I can send a single broker message with multiple types?
To anyone interested: you can use a BinaryFormatter and a MemoryStream to accomplish this. It's super flexible since you are working with binary data... You can even use interfaces, etc. You will need to convert the MemoryStream to a byte array so you can send it over the network; then you can deserialize it on the other end. Also make sure you mark your objects as serializable.
BinaryFormatter formatter = new BinaryFormatter();
MemoryStream stream = new MemoryStream();
Queue<IYodas> q = new Queue<IYodas>();
q.Enqueue(new Yoda());
q.Enqueue(new Yoda2());
formatter.Serialize(stream, q);
Byte[] package = stream.ToArray();
// Send broker message using package as the object to send
....
// Then on the other end (you will need a byte array to object function)
Queue<IYodas> result = (Queue<IYodas>)ByteArrayToObject(package);
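For reference, a ByteArrayToObject helper for the receiving side can be as simple as this (a sketch; it assumes the types involved are marked [Serializable] as mentioned above):
// turn the received byte array back into the original object graph
static object ByteArrayToObject(byte[] data)
{
    BinaryFormatter formatter = new BinaryFormatter();
    using (MemoryStream stream = new MemoryStream(data))
    {
        return formatter.Deserialize(stream);
    }
}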
I have a pair of C# client-server programs that communicate using a networkstream.
Everything works fine as it is without compression.
Now I'd like to get the bandwidth-usage down, so I want to use a compressing wrapperstream around my networkstream.
I have tried SharpZipLib, DotNetZip, C#'s own GZipStream - but I can get none of them to work.
SharpZipLib has problems flushing, and applying the fix specified here: http://community.sharpdevelop.net/forums/p/7855/22139.aspx results in an exception "Header checksum illegal".
Using DotNetZip's DeflateStream results in a ZLibException("Bad state (invalid stored block lengths)");
GZipStream gives me a System.IO.InvalidDataException stating "The magic number in GZip header is not correct. Make sure you are passing in a GZip stream.".
The way I've implemented it is that every time an array of bytes has to be sent by my framework, I create a new compression stream wrapper around the existing network stream, write the bytes to the compression stream, and then flush, close and dispose it.
This is to make sure that each WriteMessage(byte[] blah) uses its own state-independent compression stream that is flushed immediately.
I've taken care to not let any of the streams close the original network stream.
using (System.IO.Stream outputStream = CreateOutputStreamWrapper(_networkStream))
{
outputStream.Write(messageBytes, 0, messageBytes.Length);
outputStream.Flush();
outputStream.Close();
outputStream.Dispose();
}
Basically, my decompression stream is created as follows (the alternatives are commented out):
protected System.IO.Stream CreateInputStreamWrapper(System.IO.Stream inInputStream)
{
//return new DeflateStream(inInputStream, CompressionMode.Decompress, true);
//return new BZip2InputStream(inInputStream, true);
return new GZipStream(inInputStream, System.IO.Compression.CompressionMode.Decompress, true);
}
and started as
_inputStream.BeginRead(_buffer, 0, _buffer.Length, new AsyncCallback(ReceiveCallback), null);
then in the ReceiveCallback, the data is read, the stream is flushed, closed and disposed:
//Get received bytes count
var bytesRead = _inputStream.EndRead(ar);
_inputStream.Flush();
_inputStream.Close();
_inputStream.Dispose();
and then I immediately create a new input stream by calling CreateInputStreamWrapper again.
So what's going on?
Since all of the compression-stream implementations are failing with errors that come down to "there's an error in the data stream", I have a hunch it must be me and my code.
On the other hand, if I remove the compression and just use the NetworkStream there's no problem, which makes me think the problem must lie with the compression code.
Does this sound familiar to anyone?
And while we're at it, does anyone know of any (other) compression stream implementations that are suited to wrapping around a NetworkStream?
Just in case anyone else ever reads this: DotNetZip's ZLib streams have a FlushMode flag that lets you set up flushing in a way that is compatible with networking scenarios (the 'Sync' and 'Full' modes).
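If I remember the DotNetZip (Ionic.Zlib) API correctly, that looks roughly like the following; treat this as an unverified sketch rather than exact usage:
// per-message compression with a sync flush so the receiver can decode it immediately
var zlibOut = new Ionic.Zlib.DeflateStream(_networkStream, Ionic.Zlib.CompressionMode.Compress, true);
zlibOut.FlushMode = Ionic.Zlib.FlushType.Sync;   // or FlushType.Full
zlibOut.Write(messageBytes, 0, messageBytes.Length);
zlibOut.Flush();                                 // pushes the data out without ending the deflate block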
I am working with C# sockets and using XmlSerializer to send and receive data.
The XML data is sent from a server to a client over a network connection using the TCP/IP protocol. XmlSerializer.Serialize(stream) serializes the data and sends it over the socket connection, but when I then use XmlSerializer.Deserialize(stream) to read the sent data, I get an XML parse error.
Here is how I'm serializing:
MemoryStream ms = new MemoryStream();
FrameClass frame= new FrameClass ();
frame.string1 = "hello";
frame.string2 = "world";
XmlSerializer xmlSerializer = new XmlSerializer(frame.GetType());
xmlSerializer.Serialize(ms, frame);
socket.Send(ms.GetBuffer(), (int)ms.Length, SocketFlags.None);
Deserializing:
FrameClass frame;
XmlSerializer xml = new XmlSerializer(typeof(FrameClass));
frame= (FrameClass)xml.Deserialize(new MemoryStream(sockCom.SocketBuffer));
listbox1.Items.Add(frame.string1);
listbox2.Items.Add(frame.string2);
I think it has something to do with sending the data one right after another.
Can anyone teach me how to do this properly?
Have you received all of the data before attempting to deserialize it (it's not clear from your code)? I'd be inclined to receive all of the data into a local string and then deserialize from that, rather than attempting to deserialize directly from the socket. That would also allow you to actually look at the data in the debugger before deserializing it.
Try this:
using (MemoryStream ms = new MemoryStream())
{
FrameClass frame= new FrameClass ();
frame.string1 = "hello";
frame.string2 = "world";
XmlSerializer xmlSerializer = new XmlSerializer(frame.GetType());
xmlSerializer.Serialize(ms, frame);
ms.Flush();
socket.Send(ms.GetBuffer(), (int)ms.Length, SocketFlags.None);
}
If you're sending the Frame XML documents one right after the other, then you're not sending a single well-formed XML document, yet the XmlSerializer will attempt to treat the entire stream as one document!
I don't have time to research this now, but look into the XmlReaderSettings options for reading XML fragments. You would then create an XmlReader over the MemoryStream with those settings and call it in a loop.
The important thing is to flush the stream. It's also useful to put the stream in a using block to ensure it's cleaned up quickly.
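Going back to the fragment idea above, a rough sketch might look like this (untested; it assumes the received buffer contains exactly the XML fragments that were sent):
// read a sequence of concatenated FrameClass fragments from the received bytes
var settings = new XmlReaderSettings { ConformanceLevel = ConformanceLevel.Fragment };
var serializer = new XmlSerializer(typeof(FrameClass));
using (var reader = XmlReader.Create(new MemoryStream(sockCom.SocketBuffer), settings))
{
    reader.MoveToContent();
    while (!reader.EOF)
    {
        var frame = (FrameClass)serializer.Deserialize(reader);
        // use frame.string1 / frame.string2 here ...
        reader.MoveToContent();   // skip any whitespace between fragments
    }
}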
Besides what @John said about the Flush call, your code looks alright.
You say you're sending multiple FrameClass data pieces; the code as shown should work for sending just a single piece of data.
If you need to send multiple data objects, then you cannot send them all in one go, because otherwise the deserialization process will stumble over the data.
You could set up some communication between the server and the client so the server knows what it's getting:
client: I have some data
Server: ok I'm ready, send it
client: sends
Server: done processing
repeat process...