I'm currently using a NetTcpBinding with the default Buffered transfer mode. I'm trying to determine the best approach for a request that builds up a class containing a large amount of data.
Basically, a request comes into WCF to gather a bunch of reporting information and return it to the client. As long as the response stays under the 64k default MaxReceivedMessageSize it's fine, but if it goes over, an exception is thrown.
Should I switch to the Streamed TransferMode and stream a file back to the client instead, since the data could be small or large? Or is it OK to just increase MaxReceivedMessageSize? Increasing it seems like a bad idea: if multiple connections to the service all fetch large amounts of data at once, we could run into memory issues.
Any thoughts on how I could go about achieving this?
For the sizes you mention, you should switch to streaming. There's a detailed document that discusses this at http://msdn.microsoft.com/en-us/library/ms733742.aspx
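For what it's worth, the switch itself is mostly configuration; a minimal sketch in code (the quota and timeout values here are placeholders to tune, not recommendations):

```csharp
using System;
using System.ServiceModel;

// A minimal sketch of a NetTcpBinding configured for streaming.
var binding = new NetTcpBinding
{
    TransferMode = TransferMode.Streamed,        // stream instead of buffering whole messages
    MaxReceivedMessageSize = 1024L * 1024 * 64,  // in Streamed mode this caps the total stream
                                                 // size; it is not a pre-allocated buffer
    SendTimeout = TimeSpan.FromMinutes(10)       // large transfers need generous timeouts
};
```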
I would like to ask: does TransferMode = Streamed have any effect if the operation does not return or take arguments of type Stream?
If it does, how could the client possibly start processing, say, an XML-serialized class before it has been completely delivered?
As to the first question, I think you'd benefit from one of those CodeProject examples that shows how to implement streaming over WCF. Just switching to TransferMode=Streamed does not make streaming happen. If your code isn't written for streaming (in .NET, exposing a Stream, such as a FileStream, to send your data), you'll still be buffering your payloads from one spot to another. Here's a link to a relatively simple version: http://bartwullems.blogspot.de/2011/01/streaming-files-over-wcf.html
As far as the client response is concerned, in my experience the client doesn't start processing the streamed content until it is fully delivered, so there's no chance of processing a half-full XML file by mistake.
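To make the "code written for streaming" part concrete, here's a minimal sketch; the contract name, operations, and file path are invented for illustration:

```csharp
using System.IO;
using System.ServiceModel;

[ServiceContract]
public interface IReportService   // hypothetical contract, for illustration only
{
    // Returning Stream as the sole item in the response body is what
    // actually enables streaming; TransferMode=Streamed alone is not enough.
    [OperationContract]
    Stream GetReport(string reportId);
}

public class ReportService : IReportService
{
    public Stream GetReport(string reportId)
    {
        // WCF pulls from this stream and sends chunks as it goes, so the
        // file is never loaded into memory in one piece. Real code would
        // validate reportId before using it in a path.
        return File.OpenRead(Path.Combine(@"C:\reports", reportId + ".xml"));
    }
}
```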
I've searched the net and so far I couldn't find exactly what I'm looking for. Here is my question:
Let's assume we have a WCF service that returns simple text or even an object (a class). If we have increased the message size and reader quota settings to their maximum values to avoid the usual errors, and the return value is larger than the network connection can deliver quickly, will WCF manage to transfer the entire return value on its own?
Example:
The speed is 20 kbps and the return value is a class whose size is 30 kb. Will WCF transfer it in about 2 seconds (assuming the timeout values are also set to the right amounts)?
I'm pretty confused about this; please guide me.
Well it goes as fast as it goes.
If your payload is actually 30,000 bits (and not bytes), then at 20 kbps that's 1.5 seconds of raw transfer time, so you'll get it over the wire in about 2 seconds on a good day. If it's 30 kilobytes, multiply by eight.
But if anything disturbs the session, such as another application also using bandwidth, packet loss, high latency, or a problem with the connection, the transfer can fail.
This is not specific to WCF; it applies to all network communication.
Depending on the WCF configuration you may have more or less serialization overhead that can make the transmitted size larger than the "raw" data size.
The only way to know for sure is to test extensively. You can use Fiddler with a plugin, or another tool, to simulate slow network connections.
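On the timeout point, the relevant knobs live on the binding; a minimal sketch (the values are arbitrary placeholders, not recommendations):

```csharp
using System;
using System.ServiceModel;

// Generous timeouts so a slow but healthy link isn't aborted mid-transfer.
var binding = new BasicHttpBinding
{
    SendTimeout = TimeSpan.FromMinutes(5),     // time allowed to send one message
    ReceiveTimeout = TimeSpan.FromMinutes(5),  // idle time allowed on the channel
    OpenTimeout = TimeSpan.FromSeconds(30),
    CloseTimeout = TimeSpan.FromSeconds(30)
};
```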
I need to transfer 1 GB using a web service. I'm thinking of transferring it piecewise using MSMQ. Maybe there is an easier way?
If you CAN break the data up into smaller chunks, then do. Web services aren't designed to transport that much data in one go, so even though it's possible, it's gonna be a bumpy ride.
But the world doesn't work in an efficient way, so here's what you do:
1. Write the data as binary to a local file.
2. Read from the file with a stream and write it to your web service in chunks.
3. If anything goes wrong, catch the exception and try to resume from where your file pointer is.
4. If you can modify the web service, have it read the incoming data and write it to a binary file, catching any errors and, on resume, writing any new data to the file at the current pointer.
The trick is going to be to figure out how to tell the service you're trying to resume an interrupted request.
If this isn't clear, I'll try to expand some more.
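Here's a rough sketch of the client side of steps 2 and 3; the IUploadService operations are assumptions for illustration, not a real API:

```csharp
using System;
using System.IO;

// Hypothetical service interface; both operations are assumptions.
public interface IUploadService
{
    long GetResumeOffset(string fileName);  // how far a previous attempt got; 0 if fresh
    void UploadChunk(string fileName, long offset, byte[] data, int count);
}

public static class ResumableUpload
{
    public static void Send(IUploadService service, string path)
    {
        const int ChunkSize = 64 * 1024;
        string name = Path.GetFileName(path);

        using (var file = File.OpenRead(path))
        {
            // Ask the service where to resume, then seek the file there.
            long offset = service.GetResumeOffset(name);
            file.Seek(offset, SeekOrigin.Begin);

            var buffer = new byte[ChunkSize];
            int read;
            while ((read = file.Read(buffer, 0, buffer.Length)) > 0)
            {
                try
                {
                    service.UploadChunk(name, offset, buffer, read);
                    offset += read;
                }
                catch (Exception)
                {
                    // Re-query how much the service actually committed and
                    // rewind to that point. Real code would cap the retries.
                    offset = service.GetResumeOffset(name);
                    file.Seek(offset, SeekOrigin.Begin);
                }
            }
        }
    }
}
```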
I need to transfer 1 GB using a web service. I'm thinking of transferring it piecewise using MSMQ.
I want to transport people by car. I'm thinking of using a plane.
Get it? Either web service, or MSMQ. They do not magically mix.
THAT SAID: web service + large data = bad idea. Even JSON has overhead. Streaming or non-streaming? That is a LOT of open variables, and in most cases a web service makes relatively little sense here.
Up (sent to the service) or down (from the service)? More questions. I would not really want a 1 GB upload to a web service.
If you have to, split the data and make an API to ask for all the "parts" and then get them part by part; that also allows a progress bar to be shown. Your software MUST handle re-requests for parts, because failures MAY happen in transit.
I would seriously consider not using a web service here if the data is binary, and just go with a REST API, at least for downloads, and likely for uploads too. A lot depends on all the stuff you did not even know to ask about or did not bother to describe.
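As a sketch of the part-by-part idea (the /file/meta and /file/part/{i} endpoints are invented for illustration):

```csharp
using System;
using System.IO;
using System.Net.Http;
using System.Threading.Tasks;

public static class PartDownloader
{
    // Hypothetical endpoints: {baseUrl}/file/meta returns the part count,
    // {baseUrl}/file/part/{i} returns one part's bytes.
    public static async Task DownloadInParts(HttpClient http, string baseUrl, string outPath)
    {
        int partCount = int.Parse(await http.GetStringAsync($"{baseUrl}/file/meta"));
        using (var output = File.Create(outPath))
        {
            for (int i = 0; i < partCount; i++)
            {
                byte[] part = null;
                for (int attempt = 0; part == null && attempt < 3; attempt++)
                {
                    try { part = await http.GetByteArrayAsync($"{baseUrl}/file/part/{i}"); }
                    catch (HttpRequestException) { /* transient failure: re-request the part */ }
                }
                if (part == null) throw new IOException($"Part {i} failed after retries.");
                output.Write(part, 0, part.Length);
                Console.WriteLine($"Progress: {i + 1}/{partCount}");  // drives a progress bar
            }
        }
    }
}
```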
You could build a service that first creates a buffer at the destination, then splits the data and sends it through the service in pieces, and finally finalizes it.
I want to send big files, such as a 2 GB PSD, via a web service / WCF.
Is WCF message streaming the best way to cope with this?
Odds are, a client-server exchange of huge amounts of data, like 2 GB files, indicates a problem with the design. Consider these alternatives:
Don't send 2 GB across the wire: you'll tie up the client during the upload, you might lose the file in transit, and so on. Maybe send a URL to your service instead, so the service can download the file and handle any problems it encounters on the server side.
For huge amounts of data, client-server might be a totally inappropriate way to process it. You might be better off moving the processing to the client side instead of the server.
I would use something like the MSMQ transport if you want to send something that large via WCF; that way you can ensure delivery.
We are using a WCF service layer to return images from a repository. Some of the images are color, multi-page, nearly all are TIFF format. We experience slowness - one of many issues.
1.) What experiences have you had with returning images via WCF?
2.) Do you have any suggestions or tips for returning large images?
3.) All messages are serialized via SOAP, correct?
4.) Does WCF do a poor job of compressing large TIFF files?
Thanks all!
Okay, just to second the responses by ZombieSheep and Seba Gomez: you should definitely look at streaming your data. By doing so you could seamlessly integrate a GZipStream into the process. On the client side you can reverse the compression and convert the stream back into your desired image.
When you use streaming, only a select number of types can be used as parameters/return types, and you do need to modify your bindings throughout.
MSDN has a page on enabling streaming and another that describes the restrictions on streaming contracts.
I assume you also control the client-side code; this might be really hard if you don't. I have only used streaming when I had control of both the server and the client.
Good luck.
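To make the GZipStream idea concrete, a minimal server-side sketch; the contract and file path are invented, and the client has to apply the matching GZipStream decompression:

```csharp
using System.IO;
using System.IO.Compression;
using System.ServiceModel;

[ServiceContract]
public interface IImageService   // hypothetical contract
{
    [OperationContract]
    Stream GetImage(string imageId);
}

public class ImageService : IImageService
{
    public Stream GetImage(string imageId)
    {
        // Compress the TIFF into a temp stream and hand that to WCF.
        // The client wraps the response in a GZipStream with
        // CompressionMode.Decompress to get the original bytes back.
        // (This buffers the compressed copy; for very large images you'd
        // compress on the fly with a custom stream instead.)
        var compressed = new MemoryStream();
        using (var source = File.OpenRead(Path.Combine(@"C:\images", imageId + ".tif")))
        using (var gzip = new GZipStream(compressed, CompressionMode.Compress, leaveOpen: true))
        {
            source.CopyTo(gzip);
        }
        compressed.Position = 0;
        return compressed;
    }
}
```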
If you are using another .NET assembly as your client, you can use two methodologies for returning large chunks of data: streaming or MTOM.
Streaming will allow you to pass a TIFF image as if it were a normal file stream on the local filesystem. See the MSDN documentation on large-data transfer for more details on the two choices and their pros and cons.
Unfortunately, you're still going to have to transfer a large block of data, and I can't see any way around that, considering the points already raised.
I just wanted to add that it is pretty important to make sure your data is being streamed instead of buffered.
I read somewhere that even if you set transferMode to 'Streamed', if you aren't working with either a Stream itself, a Message, or an implementation of IXmlSerializable, the message is not actually streamed.
Make sure you keep that in mind.
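A quick illustration of that pitfall, with a hypothetical contract:

```csharp
using System.IO;
using System.ServiceModel;

[ServiceContract]
public interface IMediaService   // hypothetical, for illustration
{
    // Buffered despite TransferMode=Streamed: per the point above, a byte[]
    // return is materialized in memory as one piece before it is sent.
    [OperationContract]
    byte[] GetImageBuffered(string id);

    // Actually streamed: Stream is the sole item in the response body.
    [OperationContract]
    Stream GetImageStreamed(string id);
}
```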
What bindings are you using? WCF will add some overhead, but if you use basic HTTP with MTOM you lose most of the base64 overhead. You'll still have the headers, etc.
Another option would be to (wait for it...) not use WCF here - perhaps just a handler (ashx etc) that returns the binary.
Re compression: WCF itself won't have much hand in compression; the transport might, especially via IIS etc. with gzip enabled. However, images are notorious for being hard to compress further.
In a previous project I worked on, we had a similar issue. We had a web service in C# that received requests for media. A media item could range from files to images and was stored in a database using BLOB columns. Initially, the web method that handled media retrieval requests read the chunk from the BLOB and returned it to the caller. This was one round trip to the server. The problem with this approach is that the client has no feedback on the progress of the operation.
"There is no problem in computer science that cannot be solved by an extra level of indirection."
We started by refactoring the method into three methods.
Method1 sets up the conversation between the caller and the web service. This includes information about the request (like the media Id) and a capabilities exchange. The web service responds with a ticket Id, which the caller uses for all future requests. This initial call is used for resource allocation.
Method2 is called repeatedly until there is no more data to be retrieved for the media. Each call includes the current offset and the ticket Id that was provided when Method1 was called. The return value updates the current position.
Method3 is called to finish the request once Method2 reports that reading of the requested media has completed. This frees the allocated resources.
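In contract form, the three methods looked roughly like this (names and signatures reconstructed for illustration; this is not the original code):

```csharp
using System.ServiceModel;

[ServiceContract]
public interface IMediaRetrieval   // reconstruction, for illustration only
{
    // Method1: negotiate the request and allocate resources;
    // returns the ticket id used by all subsequent calls.
    [OperationContract]
    string BeginRetrieval(int mediaId);

    // Method2: called repeatedly; each call returns the next chunk
    // starting at the caller's current offset.
    [OperationContract]
    byte[] ReadChunk(string ticketId, long offset, int maxBytes);

    // Method3: release server-side resources once the media
    // has been fully read.
    [OperationContract]
    void EndRetrieval(string ticketId);
}
```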
This approach is practical because you can give the user immediate feedback about the progress of the operation. As a bonus, you can split the Method2 requests across different threads; progress can then be reported per chunk, as some BitTorrent clients do.
Depending on the size of the BLOB, you can choose to load it from the database in one go or read it in chunks as well. This means you could use a balanced mechanism that, based on a given watermark (BLOB size), chooses to load it in one go or in chunks.
If there is still a performance issue, consider packaging the results using GZipStream, or read about message encoders, paying particular attention to the binary encoder and the Message Transmission Optimization Mechanism (MTOM).
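If you try the MTOM route, enabling it on a basic HTTP binding is a one-liner; a sketch:

```csharp
using System.ServiceModel;

// Illustrative: MTOM sends binary parts as raw MIME attachments
// instead of base64 text inside the SOAP envelope.
var binding = new BasicHttpBinding
{
    MessageEncoding = WSMessageEncoding.Mtom
};
```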