WCF Compression with NetTcpBinding in .NET 4.0 - C#

I finally managed to get compression working with NetTcpBinding in WCF on .NET 4.0. But it seems quite nasty to me, so maybe someone else has a better idea.
Some general information:
I know this works with .NET 4.5 out of the box - but we are stuck on 4.0
I found several examples with CustomBinding - but we want to stick with NetTcpBinding because of the binary encoding (it is simply faster than text encoding)
We are doing all the configuration in code (except the server address), so for the customer it is just plug and play with no chance to change anything - but that also sometimes made it difficult to get an example working (most are provided as config files)
My first approach was to implement a message inspector on the client and server side which does the compression:
http://dotnetlombardia.org/b/tonyexpo/archive/2011/03/09/compressione-in-wcf.aspx
But whenever I changed (or replaced) the original message in any way, AfterReceiveReply on the client side was never executed.
In the WCF traces I could even see that the client received the message and sent an ACK to the server, which the server received - but the client still ran into a timeout.
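For context, a minimal sketch of what that client-side inspector looked like conceptually (CompressionHelper and its Compress/Decompress methods are placeholders, not the code from the linked article):
public class CompressionClientInspector : IClientMessageInspector
{
    public object BeforeSendRequest(ref Message request, IClientChannel channel)
    {
        // replace the outgoing message with a compressed copy (CompressionHelper is a hypothetical helper)
        request = CompressionHelper.Compress(request);
        return null;
    }

    public void AfterReceiveReply(ref Message reply, object correlationState)
    {
        // this is the callback that was never invoked once the reply message had been replaced
        reply = CompressionHelper.Decompress(reply);
    }
}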
Then I found the MessageEncoderFactory for compression by Microsoft:
http://msdn.microsoft.com/en-us/library/ms751458%28v=vs.110%29.aspx
And applied the bug fix provided here:
http://blogs.msdn.com/b/dmetzgar/archive/2011/03/14/gzipmessageencoder-part-two.aspx
Finally, I inherited from NetTcpBinding and applied the new message encoder in the CreateBindingElements function:
public class CompressedNetTcpBinding : NetTcpBinding
{
    MyCompressionMessageEncodingBindingElement compressionEncoding;

    public CompressedNetTcpBinding()
        : base()
    {
        // grab the private binary encoding element so its settings (e.g. ReaderQuotas) stay in sync with the binding
        FieldInfo fi = typeof(NetTcpBinding).GetField("encoding", BindingFlags.Instance | BindingFlags.NonPublic);
        BinaryMessageEncodingBindingElement binaryEncoding = (BinaryMessageEncodingBindingElement)fi.GetValue(this);
        compressionEncoding = new MyCompressionMessageEncodingBindingElement(binaryEncoding, CompressionAlgorithm.Deflate);
    }

    /// <summary>
    /// Exchange <see cref="BinaryMessageEncodingBindingElement"/> and use compressed encoding binding element.
    /// </summary>
    /// <returns>binding elements with compressed message binding element</returns>
    public override BindingElementCollection CreateBindingElements()
    {
        BindingElementCollection bec = base.CreateBindingElements();

        // find and remove the original binary encoder, then insert the compressing encoder in its place
        BinaryMessageEncodingBindingElement enc = null;
        foreach (BindingElement be in bec)
        {
            if (be is BinaryMessageEncodingBindingElement)
            {
                enc = (BinaryMessageEncodingBindingElement)be;
                break;
            }
        }
        bec.Remove(enc);
        bec.Insert(2, compressionEncoding);

        return bec;
    }
}
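Using the binding is then a drop-in replacement for NetTcpBinding. A sketch (service contract, types and addresses are illustrative):
// server side
var binding = new CompressedNetTcpBinding();
var host = new ServiceHost(typeof(MyService));
host.AddServiceEndpoint(typeof(IMyService), binding, "net.tcp://localhost:8000/MyService");
host.Open();

// client side
var factory = new ChannelFactory<IMyService>(new CompressedNetTcpBinding(), "net.tcp://server:8000/MyService");
IMyService proxy = factory.CreateChannel();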
I forward the ordinary BinaryMessageEncodingBindingElement to the compression encoding, so when I change any settings of the NetTcpBinding, e.g. ReaderQuotas, they are applied correctly.
I know the encoding member in NetTcpBinding is also used in the private function IsBindingElementsMatch, but so far that has not caused any problems.
On my local machine, the (startup) performance penalty is insignificant (100ms - 250ms). But over LAN and WAN there is a significant performance gain (up to several seconds).
So what do you think:
Is that a (the) way to go?
Are there any better solutions?

Related

Why is reading from an IBM MQ queue extremely slow using the .NET Standard library? (unlike .NET Framework)

I wrote a console app on .NET Framework to read, without consuming, the messages from an IBM MQ queue.
It worked perfectly.
Now I need to migrate that app to .NET Core, and I can't figure out why it is extremely slow.
How it works:
target framework .NET Core 3.1
IBMMQDotNetClient NuGet package installed
created a static helper class with a static constructor where I initialise the MQEnvironment properties like so:
MQEnvironment.CertificateLabel = "ibmwebsphere"; // this is the friendly name of the certificate in MMC
MQEnvironment.SSLKeyRepository = "*SYSTEM";
added a method called Init where I initialise the connection properties for the queue manager like so:
Hashtable properties = new Hashtable();
properties.Add(MQC.TRANSPORT_PROPERTY, MQC.TRANSPORT_MQSERIES_MANAGED);
properties.Add(MQC.HOST_NAME_PROPERTY, hostName); // I read the hostName from a config file
properties.Add(MQC.PORT_PROPERTY, port); // I read the port from a config file
properties.Add(MQC.CHANNEL_PROPERTY, channelName); // I read the channel from a config file
properties.Add(MQC.SSL_CIPHER_SPEC_PROPERTY, cipherSpec); // I read the cipher spec from a config file, it's something like TLS_RSA_WITH_AES_256_CBC_SHA256
Then I create a connection to the queue manager, access the queue, and read messages one by one until I reach the end of the queue.
var queueManager = new MQQueueManager(qm, properties); // I read the qm from a config file
var queue = queueManager.AccessQueue(queueName, MQC.MQOO_BROWSE + MQC.MQOO_FAIL_IF_QUIESCING); // I read the queueName from a config file
var mqGMO = new MQGetMessageOptions();
mqGMO.Options = MQC.MQGMO_FAIL_IF_QUIESCING + MQC.MQGMO_NO_WAIT + MQC.MQGMO_BROWSE_NEXT;
mqGMO.MatchOptions = MQC.MQMO_NONE;
try
{
    while (true)
    {
        MQMessage queueMessage = new MQMessage();
        queue.Get(queueMessage, mqGMO); // code gets apparently stuck on this line,
        // overprocessing, for many minutes until it gets to the next line,
        // even though I mentioned "NO_WAIT" in the options.
        // Note this only happens for .NET Core, but not in .NET Framework.
        var message = queueMessage.ReadString(queueMessage.MessageLength);
        string fileName = message.Substring(0, 3); // some processing here to extract some info from each message
    }
}
catch (MQException ex)
{
    if (ex.ReasonCode.CompareTo(MQC.MQRC_NO_MSG_AVAILABLE) == 0)
    {
        // harmless exception to indicate there are no messages on the queue
    }
}
catch (Exception ex)
{
    Console.WriteLine(ex);
}
Of course it would be preferable to use a listener; I'm not sure how to do that yet, and it would be part of optimising. But for now: why is it so slow on this line?
queue.Get(queueMessage, mqGMO); // but, again, as mentioned, only with the amqmdnetstd.dll (.NET Core), because if I use amqmdnet.dll (.NET Framework), it works super fast, and it's supposed to be the other way around.
I do need to use .NET Standard/Core because I will run this on Linux; currently I am testing on Windows.
Don't use the MQEnvironment class, as it is not thread safe. Also, don't mix and match between the MQEnvironment class and the MQ Hashtable. Put your SSL/TLS information as properties in the MQ Hashtable.
i.e.
properties.Add(MQC.SSL_PEER_NAME_PROPERTY, "ibmwebsphere");
properties.Add(MQC.SSL_CERT_STORE_PROPERTY, "*SYSTEM");
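Putting that together with the values from the question, the connection setup without MQEnvironment could look like this (a sketch; the property mapping follows the suggestion above):
Hashtable properties = new Hashtable();
properties.Add(MQC.TRANSPORT_PROPERTY, MQC.TRANSPORT_MQSERIES_MANAGED);
properties.Add(MQC.HOST_NAME_PROPERTY, hostName);
properties.Add(MQC.PORT_PROPERTY, port);
properties.Add(MQC.CHANNEL_PROPERTY, channelName);
properties.Add(MQC.SSL_CIPHER_SPEC_PROPERTY, cipherSpec);
properties.Add(MQC.SSL_PEER_NAME_PROPERTY, "ibmwebsphere"); // replaces MQEnvironment.CertificateLabel
properties.Add(MQC.SSL_CERT_STORE_PROPERTY, "*SYSTEM");     // replaces MQEnvironment.SSLKeyRepository
var queueManager = new MQQueueManager(qm, properties);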
There isn't enough of your code to test to see why it might be failing.
I wrote and posted a blog item called: IBM MQ For .NET Core Primer. In the blog post, I included a fully functioning C# MQ example (MQTest62L.cs) that was built and run using .NET Core v3.1 and everything worked perfectly (see very bottom of post). Hence, I would suggest you follow my instructions, download, compile and run MQTest62L.cs to see if the issue is your code or MQ Client library.
Note: I was using Windows 10 Pro, IBM MQ v9.2.0.0 and .NET Core v3.1.415.

WCF - do not serialize (emit) empty collections

We are implementing a client/server application. Data is sent across the LAN, where LAN means a company network with several sites/locations.
We are using WCF and NetTcpBinding (EDIT: VS2010, .NET 4.0).
I know that [DataMember(EmitDefaultValue = false)] is not recommended by Microsoft. But as mentioned above, data might be sent from one site to another. Therefore: size really matters!
Not sending the default value works fine most of the time. I just have an issue with collections of any kind: I do not want to transfer empty collections!
So I usually end up with two members of the same type (one to work with, one for the network) and I need to implement the methods OnSerializing and OnDeserialized.
[NonSerialized]
private List<someType> data = new List<someType>();    // working member, never put on the wire

[DataMember(EmitDefaultValue = false)]
private List<someType> network = new List<someType>(); // wire member; set to null when empty so it is not emitted

[OnDeserialized]
private void OnDeserialized(StreamingContext c)
{
    if (network == null)
        data = new List<someType>();
    else
        data = network;
}

[OnSerializing]
private void OnSerializing(StreamingContext c)
{
    if (data.Count > 0)
        network = data;
    else
        network = null;
}
Is there any elegant way to do that?
Or maybe even a completely different approach?
Remark: for simplicity I did not care about possible multi-threading issues.
But as mentioned above, data might be sent from one site to another.
Therefore: size really matters!
Do you really think that a few bytes will make a big difference using NetTcpBinding in a LAN? Did you run a load test to show that?
I know that [DataMember(EmitDefaultValue = false)] is not recommended
by Microsoft
It's not recommended because it's not interoperable. This recommendation does not apply to your case, as you only have WCF clients and a WCF server on a NetTcpBinding. That configuration already does not support interop (with Java or PHP).
The WCF binary encoder (used in NetTcpBinding) supports GZip/Deflate compression since .NET 4.5. You will save more bytes with this feature than by removing empty collections.
Read more here.
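For reference, a minimal sketch of what that looks like in .NET 4.5 code. NetTcpBinding does not expose the setting directly, so a CustomBinding is shown, and the security element here is only an approximation of NetTcpBinding's default transport security:
// .NET 4.5+: enable GZip compression on the binary message encoder
var encoding = new BinaryMessageEncodingBindingElement
{
    CompressionFormat = CompressionFormat.GZip
};
var binding = new CustomBinding(
    encoding,
    new WindowsStreamSecurityBindingElement(), // approximates NetTcpBinding's default transport security
    new TcpTransportBindingElement());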

Can I detect if content has been compressed in my HttpModule?

I have an HttpModule which is used to dynamically compress content from an ASP.NET (MVC3) web application. The approach is very similar to the CompressionModule in this article (where the module applies a GZip filter to the HttpResponse and sets the correct Content-encoding header).
For one reason and another, this needs to run in classic mode, not integrated pipeline mode.
The problem I've got is that on some servers that have IIS compression enabled, IIS compresses the content and then my module compresses it again.
The upshot is that I get content compressed twice, with an encoding:
Content-encoding: gzip,gzip
one from IIS, and one from this line in my code:
httpResponse.AppendHeader("Content-encoding", "gzip");
Does anyone know a way, in classic mode, that I can check to see if the content is already compressed, or if compression is enabled on the server, in order to bypass my own compression?
In integrated pipeline mode, this check is as simple as
if (httpResponse.Headers["Content-encoding"] != null)
{
    return;
}
i.e. check if anything has already set a content-encoding and if so, do nothing.
However, I'm stumped in classic mode. Unfortunately, accessing HttpResponse.Headers is not allowed in classic mode, so I can't do my barrier check.
All ideas gratefully received.
Theoretically, you can use reflection to peek into the HttpResponse._cacheHeaders field, where ASP.NET apparently stores all yet-to-be-sent headers in classic mode:
if (this._wr is IIS7WorkerRequest)
{
    this.Headers.Add(HttpResponseHeader.MaybeEncodeHeader(name), HttpResponseHeader.MaybeEncodeHeader(value));
}
else if (flag)
{
    if (this._cacheHeaders == null)
    {
        this._cacheHeaders = new ArrayList();
    }
    this._cacheHeaders.Add(new HttpResponseHeader(knownResponseHeaderIndex, value));
}
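A sketch of what such a reflection-based check could look like; the field name and the assumption that the internal HttpResponseHeader type exposes a Name property are unverified internals, so treat this as a starting point only (requires System.Reflection and System.Collections):
private static bool HasContentEncodingHeader(HttpResponse response)
{
    // Hypothetical: inspect the internal _cacheHeaders list via reflection (classic mode only)
    FieldInfo fi = typeof(HttpResponse).GetField("_cacheHeaders", BindingFlags.Instance | BindingFlags.NonPublic);
    if (fi == null)
        return false;

    ArrayList headers = fi.GetValue(response) as ArrayList;
    if (headers == null)
        return false;

    foreach (object header in headers)
    {
        // assumes the internal HttpResponseHeader type exposes a Name property
        PropertyInfo nameProp = header.GetType().GetProperty("Name", BindingFlags.Instance | BindingFlags.Public | BindingFlags.NonPublic);
        if (nameProp == null)
            continue;

        string name = nameProp.GetValue(header, null) as string;
        if (string.Equals(name, "Content-encoding", StringComparison.OrdinalIgnoreCase))
            return true;
    }
    return false;
}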
I found a relatively easy way to check whether the output is already compressed. My approach works even with IIS running in classic mode, and although it may be considered a "hack", I found it to work quite consistently. The idea is more or less the following:
// checks if the response is already compressed
private bool IsResponseCompressed(HttpApplication app)
{
    string filter = app.Response.Filter.ToString().ToLower();
    if (filter.Contains("gzip") || filter.Contains("deflate"))
    {
        return true;
    }
    return false;
}
Basically the code works by checking the type name of the response filter: if the output stream is already wrapped in a compression filter, the name contains "gzip" or "deflate", so it's easy to check for compression.
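In the module, that check can then guard the compression step. A sketch (the event used and the filter wiring are illustrative, not the exact code of the module from the article):
// before applying our own GZip filter in the HttpModule:
private void OnReleaseRequestState(object sender, EventArgs e)
{
    HttpApplication app = (HttpApplication)sender;
    if (IsResponseCompressed(app))
    {
        return; // something (e.g. IIS) already compressed the output
    }
    app.Response.Filter = new GZipStream(app.Response.Filter, CompressionMode.Compress);
    app.Response.AppendHeader("Content-encoding", "gzip");
}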

protobuf-csharp-port

I'm using Jon Skeet's (excellent) port of Google's Protocol Buffers to C#/.Net.
For practice, I have written a dummy Instant Messenger app that sends some messages down a socket. I have a message definition as follows:-
message InstantMessage {
  required string Message = 1;
  required int64 TimeStampTicks = 2;
}
When the sender serialises the message, it sends it really elegantly:-
...
InstantMessage.Builder imBuild = new InstantMessage.Builder();
imBuild.Message = txtEnterText.Text;
imBuild.TimeStampTicks = DateTime.Now.Ticks;
InstantMessage im = imBuild.BuildPartial();
im.WriteTo(networkStream);
...
This works great. But at the other end, I'm having trouble getting the ParseFrom to work.
I want to use:-
InstantMessage im = InstantMessage.ParseFrom(networkStream);
But instead I have had to read it into a byte array and then parse it from there. This is obviously not ideal for a number of reasons. Current code is:-
while (true)
{
    Byte[] byteArray = new Byte[10000000];
    int intMsgLength;
    int runningMsgLength = 0;
    DateTime start = DateTime.Now;

    while (true)
    {
        runningMsgLength += networkStream.Read(byteArray, runningMsgLength, 10000000 - runningMsgLength);
        if (!networkStream.DataAvailable)
            break;
    }

    InstantMessage im = InstantMessage.ParseFrom(byteArray.Take(runningMsgLength).ToArray());
When I try to use ParseFrom, control does not return to the calling method even when I know a valid message is on the wire.
Any advice would be gratefully received,
PW
Sorry for taking a while to answer this. As Marc says, protocol buffers don't have a terminator, and they aren't length prefixed unless they're nested. However, you can put on the length prefix yourself. If you look at MessageStreamIterator and MessageStreamWriter, you'll see how I do this - basically I pretend that I'm in the middle of a message, writing a nested message as field 1. Unfortunately when reading the message, I have to use internal details (BuildImpl).
There's now another API to do this: IMessage.WriteDelimitedTo and IBuilder.MergeDelimitedFrom. This is probably what you want at the moment, but I seem to remember there's a slight issue with it in terms of detecting the end of the stream (i.e. when there isn't another message to read). I can't remember whether there's a fix for it at the moment - I have a feeling it's changed in the Java version and I may not have ported the change yet. Anyway, that's definitely the area to look at.
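A sketch of what that could look like with the delimited API, reusing the types from the question (error handling and end-of-stream detection omitted):
// sender: write a length-prefixed message
InstantMessage.Builder imBuild = new InstantMessage.Builder();
imBuild.Message = txtEnterText.Text;
imBuild.TimeStampTicks = DateTime.Now.Ticks;
imBuild.BuildPartial().WriteDelimitedTo(networkStream);

// receiver: read exactly one length-prefixed message
InstantMessage.Builder builder = InstantMessage.CreateBuilder();
builder.MergeDelimitedFrom(networkStream);
InstantMessage im = builder.BuildPartial();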
Protobuf has no terminator - so either close the stream, or use your own length prefix etc. Protobuf-net exposes this easily via SerializeWithLengthPrefix / DeserializeWithLengthPrefix.
Simply: without this, it can't know where each message ends, so keeps trying to read to the end of the stream.
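For completeness, the protobuf-net equivalent looks roughly like this (this assumes an InstantMessage class decorated for protobuf-net, not the type generated by protobuf-csharp-port):
// write with a length prefix, then read it back the same way
Serializer.SerializeWithLengthPrefix(networkStream, im, PrefixStyle.Base128);
InstantMessage received = Serializer.DeserializeWithLengthPrefix<InstantMessage>(networkStream, PrefixStyle.Base128);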

Using C# how to clean up MSMQ message format to work with C++ IXMLDOMDocument2

I'm trying to get a C++ service to load an XML document from an MSMQ message generated by C#. I can't really change the C++ side of things because I'm trying to inject test messages into the queue. The C++ service uses the following to load the XML:
CComPtr<IXMLDOMDocument2> spDOM;
CComPtr<IXMLDOMNode> spNode;
CComBSTR bstrVal;

if (FAILED(hr = spDOM.CoCreateInstance(CLSID_DOMDocument30)))
{
    g_infoLog->LogCOMError(hr, "CWorker::ProcessBody() Can't Create DOM");
    pWork->m_nFailure = WORKFAIL_BADXML;
    goto Exit;
}

hr = spDOM->loadXML(bstrBody, &vbResult);
The C# code to send the MSMQ message looks like this (just test code not pretty):
// open the queue
var mq = new MessageQueue(destinationQueue)
{
    // store message on disk at all intermediaries
    DefaultPropertiesToSend = { Recoverable = true },
    // set the formatter to Binary, default is XML
    Formatter = new BinaryMessageFormatter()
};
// send message
mq.Send(messageContent, "TestMessage");
mq.Close();
I tried to send the same message using the BinaryMessageFormatter, but it puts what I think are Unicode characters at the top before the XML starts:
.....ÿÿÿ
ÿ.......
......À)
If I use the default XML formatter the message has the following top element. The C++ service doesn't seem to handle this.
<?xml version="1 .0"?>..<string>& lt;
Do you know of a way I could easily clean up those extra characters when using the binary formatter? If so, I think it might work.
Have you tried the ActiveXMessageFormatter? It might not compile with it as the formatter, I have no way to test here, but it might.
EDIT: just tried it and it compiles OK; whether the result is any better I still couldn't say for sure.
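A sketch of the sending code with that formatter (untested here as well; whether the C++ reader accepts the resulting body still has to be verified):
// ActiveXMessageFormatter writes the body the way the classic MSMQ COM components expect
var mq = new MessageQueue(destinationQueue)
{
    DefaultPropertiesToSend = { Recoverable = true },
    Formatter = new ActiveXMessageFormatter()
};
mq.Send(messageContent, "TestMessage");
mq.Close();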
