I need to transfer 1 GB of data using a web service. I'm thinking of transferring it piecewise using MSMQ. Maybe there is an easier way?
If you CAN break the data up into smaller chunks, then do. Web services aren't designed to transport that much data in one go, so even though it's possible, it's going to be a bumpy ride.
But the world doesn't work in an efficient way, so here's what you do:
1. Write the data as binary to a local file.
2. Create a StreamWriter that writes to your web service, using a StreamReader to read from the file.
3. If anything happens, catch the exception and try to resume from where your file pointer is.
4. If you can modify the web service, have it read the data and write it to a binary file, catching any errors and, on resume, writing any new data to the file at the current pointer.
The trick is going to be to figure out how to tell the service you're trying to resume an interrupted request.
If this isn't clear, I'll try to expand some more.
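In case it helps, here is a minimal sketch of that loop. The IFileService interface, GetReceivedByteCount, and UploadChunk are hypothetical stand-ins for whatever your service actually exposes:

    using System;
    using System.IO;

    // Hypothetical contract; the real one depends on your service.
    public interface IFileService
    {
        long GetReceivedByteCount(Guid fileId);                  // how much the server has
        void UploadChunk(Guid fileId, long offset, byte[] data); // append at offset
    }

    static class Uploader
    {
        public static void UploadWithResume(IFileService service, Guid fileId, string path)
        {
            const int ChunkSize = 64 * 1024;
            var buffer = new byte[ChunkSize];

            using (var file = File.OpenRead(path))
            {
                // Ask the service how much it already has, so we can resume.
                file.Seek(service.GetReceivedByteCount(fileId), SeekOrigin.Begin);

                int read;
                while ((read = file.Read(buffer, 0, buffer.Length)) > 0)
                {
                    var chunk = new byte[read];
                    Array.Copy(buffer, chunk, read);
                    try
                    {
                        service.UploadChunk(fileId, file.Position - read, chunk);
                    }
                    catch (Exception)
                    {
                        // Step 3: on failure, fall back to the server's pointer and retry.
                        file.Seek(service.GetReceivedByteCount(fileId), SeekOrigin.Begin);
                    }
                }
            }
        }
    }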
I need to transfer 1 GB of data using a web service. I'm thinking of transferring it piecewise using MSMQ.
I want to transport people with a car. I think of using a plane.
Get it? Either web service, or MSMQ. They do not magically mix.
THAT SAID: Web service, large data = bad idea. Even JSON has overhead. Streaming, non-streaming? That is a LOT of open variables, and in most cases the web service here makes relatively little sense.
Up (sent to the service) or down (from the service)? More questions - I would not really want a 1 GB upload to a web service.
If you have to, split the data and make an API to ask for all the "parts" and then get them part by part - that also allows a progress bar to be shown. Your software MUST handle re-requests for parts due to failures which MAY happen in transit.
I would seriously consider not using a web service here if the data is binary and just go with a REST API, at least for downloads. Likely also for uploads. A lot depends on all the stuff you did not even know how to ask for or did not bother to describe.
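To illustrate the part-by-part idea for downloads, here is a rough sketch; IPartsApi, GetPartCount, and GetPart are made-up names for whatever your API would actually expose:

    using System;
    using System.IO;

    // Hypothetical API surface for a "parts" download.
    public interface IPartsApi
    {
        int GetPartCount(string fileId);
        byte[] GetPart(string fileId, int index);
    }

    static class PartsDownloader
    {
        public static void DownloadAllParts(IPartsApi api, string fileId, string targetPath)
        {
            int parts = api.GetPartCount(fileId);
            using (var output = File.Create(targetPath))
            {
                for (int i = 0; i < parts; i++)
                {
                    byte[] part = null;
                    for (int attempt = 0; part == null; attempt++)
                    {
                        try { part = api.GetPart(fileId, i); }
                        catch (Exception) when (attempt < 3)
                        {
                            // Failure in transit: re-request the same part.
                        }
                    }
                    output.Write(part, 0, part.Length);
                    Console.WriteLine("Progress: {0}/{1} parts", i + 1, parts); // progress bar hook
                }
            }
        }
    }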
You can make the service first create a buffer at the destination, then split the data and send it through the service piece by piece, and finally finalize it.
Related
Need some help figuring out what I am looking for. Basically, I need a service in which the server dumps a bunch of XML into a stream (over a period of time), and every time the dump occurs, N clients read the dump.
Example: every time one of 1,000 stocks goes up by 5 cents, the service dumps some XML into a stream. The connecting applications grab the information from the stream.
I don't think the connection will ever close, as there needs to be something reading the stream for new data.
This needs to adhere to WCF REST standards. Is there something out there like what I'm looking for? In the end, it's just a non-stop stream of data.
Update: looks like the service needs to use a multipart/mixed content type.
An application I'm working on has a similar architecture, and I'm planning to use SignalR to push updates to clients, using long-polling techniques. I haven't implemented it yet, so I can't swear it will work for you, but their documentation seems promising. Update: I have implemented this now, and it works very well.
Pushing data from the server to the client (not just browser clients) has always been a tough problem. SignalR makes it dead easy and handles all the heavy lifting for you.
Scott Hanselman has a good blog on the subject, and there is a useful article (involving WCF, REST, and SignalR) here: http://www.codeproject.com/Articles/324841/EventBroker
Instead of using WCF, have you looked into ASP.NET MVC WebAPI?
For more information about using PushStreamContent in WebAPI, Henrik has a nice blog with an example (under the heading 'Push Content').
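For what it's worth, here is a rough sketch of what that can look like; the controller name and the stock-tick payload are made up:

    using System.IO;
    using System.Net.Http;
    using System.Threading;
    using System.Web.Http;

    public class StockFeedController : ApiController
    {
        public HttpResponseMessage Get()
        {
            var response = Request.CreateResponse();
            // PushStreamContent keeps the response open and lets us write
            // to the output stream whenever a new event occurs.
            response.Content = new PushStreamContent((stream, content, context) =>
            {
                using (var writer = new StreamWriter(stream))
                {
                    while (true) // in practice, stop when a write throws (client gone)
                    {
                        writer.WriteLine("<tick symbol=\"MSFT\" change=\"0.05\"/>");
                        writer.Flush();
                        Thread.Sleep(1000); // placeholder for a real event source
                    }
                }
            }, "application/xml");
            return response;
        }
    }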
Have you considered archived Atom feeds? They are 100% RESTful (hypermedia controls and all) and most importantly, they are very scalable.
Specifically, the archive documents never change, so you can set a cache expiry of 1 year or more. The subscription document is where all the newest events go and is constantly changing, but with the appropriate HTTP caching headers, you can make it so you return 304 Not Modified if nothing has changed between client requests. Also, if your service has a natural time resolution, you can set the max-age to take advantage of that. For instance, if your data has a 20-minute resolution, you could include the following header in the subscription document response:
Cache-Control: max-age=1200
That way you can let your caches do most of the heavy lifting, and the clients can poll the subscription document as often as they like without bringing your service to its knees.
I'm not even sure how to ask this question, but I'll give it a shot.
I have a program in C# which reads in values from sensors on a manufacturing line that are indicative of the line's health. These values update every 500 milliseconds. I have four lines that this is done for. I would like to write an "overview" program which will be able to access these values over the network to give a good summary of how the factory is doing. My question is: how do I get the values from the C# programs on the lines to the C# overview program in real time?
If my question doesn't make much sense, let me know and I'll try to rephrase it.
Thanks!
You have several options:
MSMQ
Write the messages to MSMQ (Microsoft Message Queuing). This is an (optionally) persistent and fast store for transporting messages between machines.
Since you say that you need the messages in the other app in near real time, it makes sense to use MSMQ, because you do not want to write logic in that app for handling large amounts of incoming messages.
Keep MSMQ in the middle and take out what you need - and, most importantly, when you can.
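A bare-bones sketch with System.Messaging; the queue path and the message format are just examples:

    using System;
    using System.Messaging;

    class LineHealthQueueDemo
    {
        const string QueuePath = @".\Private$\lineHealth"; // example queue name

        static void Main()
        {
            if (!MessageQueue.Exists(QueuePath))
                MessageQueue.Create(QueuePath);

            using (var queue = new MessageQueue(QueuePath))
            {
                queue.Formatter = new XmlMessageFormatter(new[] { typeof(string) });

                // Producer side (line monitor): push a reading every 500 ms.
                queue.Send("line1;health=0.97");

                // Consumer side (overview app): take messages out when you can.
                Message message = queue.Receive(TimeSpan.FromSeconds(5));
                Console.WriteLine((string)message.Body);
            }
        }
    }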
WCF
The other app could expose a WCF service which can be called by your realtime app each time there's data available. The endpoint could be over net.tcp, meaning low overhead, especially if you send small messages.
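As a sketch, the overview app could host something like this (the contract and names are illustrative):

    using System;
    using System.ServiceModel;

    [ServiceContract]
    public interface ILineHealthSink
    {
        // One-way, so the realtime sender never blocks waiting for a reply.
        [OperationContract(IsOneWay = true)]
        void Report(string lineId, double health);
    }

    public class LineHealthSink : ILineHealthSink
    {
        public void Report(string lineId, double health)
        {
            Console.WriteLine("{0}: {1}", lineId, health);
        }
    }

    class OverviewHost
    {
        static void Main()
        {
            using (var host = new ServiceHost(typeof(LineHealthSink)))
            {
                host.AddServiceEndpoint(typeof(ILineHealthSink),
                    new NetTcpBinding(), "net.tcp://localhost:9000/lineHealth");
                host.Open();
                Console.ReadLine(); // keep the host alive
            }
        }
    }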
Other options include what has been said before: database, file, etc. So you can make your choice between a wide variety of options.
It depends on a number of things, I would say. First of all, is it just the last value of each line that is interesting for the 'overview' application, or do you need multiple values to determine line health, or do you perhaps want to have a history of values?
If you're only interested in the last value, I would directly communicate this value to the overview app. As suggested by others, you have numerous possibilities here:
Raw TCP using TcpClient (may be a bit too low-level).
Expose an HTTP endpoint on the overview application (maybe it's a web application) and post new values to this endpoint.
Use WCF to expose some endpoint (named pipes, net.tcp, http, etc.) on the overview application and call this endpoint from each client application.
Use MSMQ to have each client enqueue messages that are then picked up by the overview app (also directly supported by WCF).
If you need some history of values, or you need multiple values to determine line health, I would go with a database solution. Then again you have a choice: does each client write to the database directly, or does each client post to the overview app (using any of the communication means described above), with the overview app writing to the database?
Without knowing any more constraints for your situation, it's hard to decide between any of these.
You can use named pipes (see http://msdn.microsoft.com/en-us/library/bb546085.aspx) to have a fast way to communicate between two processes.
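For example (the pipe name "lineHealth" is arbitrary):

    using System;
    using System.IO;
    using System.IO.Pipes;

    static class PipeDemo
    {
        // Server side (overview app): wait for a client and read readings.
        public static void RunServer()
        {
            using (var server = new NamedPipeServerStream("lineHealth"))
            {
                server.WaitForConnection();
                using (var reader = new StreamReader(server))
                {
                    string line;
                    while ((line = reader.ReadLine()) != null)
                        Console.WriteLine(line);
                }
            }
        }

        // Client side (line monitor): connect and write readings.
        public static void RunClient()
        {
            using (var client = new NamedPipeClientStream(".", "lineHealth", PipeDirection.Out))
            {
                client.Connect();
                using (var writer = new StreamWriter(client) { AutoFlush = true })
                {
                    writer.WriteLine("line1;health=0.97");
                }
            }
        }
    }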
A database. Put your values into a database and have the other app pull them out of that same database. This is a very common solution to this problem and opens up worlds of new scenarios.
See: Relational database
I want to send big files, like a 2 GB PSD, via a web service / WCF.
Is WCF message streaming the best way to cope with this?
Odds are, a client-server design which exchanges huge amounts of data like 2GB files indicates a problem with the design. Consider these alternatives:
Don't send 2GB across the wire, you'll tie up the client during the upload, you might lose the file in transit, etc etc etc. Maybe send a URL to your service instead, so the service can download the file and handle any problems it encounters on the server side.
For huge amounts of data, client-server might be a totally inappropriate way to process your data. You might be better moving processing to the client side instead of the server.
I would use something like the MSMQ transport if you want to send something that large via WCF; that way you can ensure delivery.
I'm looking for a way to pause or resume an upload process via C#'s WebClient.
pseudocode:
    WebClient client = new WebClient();
    client.UploadFileAsync(new Uri("http://mysite.com/receiver.php"), "POST", @"C:\MyFile.jpg");
Maybe something like..
    client.Pause();
Any ideas?
WebClient doesn't have this kind of functionality - even the slightly-lower-level HttpWebRequest doesn't, as far as I'm aware. You'll need to use an HTTP library which gives you more control over exactly when things happen (which will no doubt involve more code as well, of course). The point of WebClient is to provide a very simple API to use in very simple situations.
As stated by Jon Skeet, this is not available in either the WebClient or the HttpWebRequest classes.
However, if you have control of the server that receives the upload, perhaps you could upload small chunks of the file using WebClient and have the server assemble the chunks once everything has been received. Then it would be somewhat easier for you to build pause/resume functionality.
If you do not have control of the server, you will need to use an API that gives you more control - and, subsequently, more stuff to worry about. And even then, the server might time you out if you pause for too long.
OK, without giving you code examples, I will tell you what you can do.
Write a WCF service for your upload; that service needs to use streaming.
Things to remember:
1. The client and server need to identify the file somehow; I suggest the use of a GUID, so the server knows which file to append the extra data to.
2. The client needs to keep track of its position in the array so it knows where to begin streaming after it resumes. (You can even get the server to tell the client how much data it has, but make sure the client knows too.)
3. The server needs to keep track of how much data it has already received and how much is still missing. Files should have a lifetime on the server; you don't want half-uploaded and forgotten files stored on the server forever.
4. Please remember that streaming does not allow authentication, since the whole call is just one HTTP request. You can use SSL, but remember that it will add overhead.
5. You will need to create the service contract at the message level; the standard method won't do (see the sketch below).
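A message-level contract along those lines might look like the following; the type and member names are only illustrative:

    using System;
    using System.IO;
    using System.ServiceModel;

    // Streamed operations need a message contract: exactly one body member
    // of type Stream, with everything else carried as headers.
    [MessageContract]
    public class UploadRequest
    {
        [MessageHeader]
        public Guid FileId;   // identifies the file across resumed calls

        [MessageHeader]
        public long Offset;   // where this chunk starts, for resumed uploads

        [MessageBodyMember]
        public Stream Data;   // the file content, streamed
    }

    [MessageContract]
    public class UploadResponse
    {
        [MessageHeader]
        public long BytesReceived; // lets the client learn the server's position
    }

    [ServiceContract]
    public interface IResumableUpload
    {
        [OperationContract]
        UploadResponse Upload(UploadRequest request);
    }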
I'm currently writing a blog post about this very subject; it will be posted this week with code samples for how to get it working.
You can check it on my blog.
I know this answer does not contain code samples, but the blog will have some; all in all, this is one way of doing stop and resume for file uploads to a server.
To do something like this, you must write your own worker thread that does the actual HTTP post stepwise.
Before sending a chunk, you have to check whether the operation is paused, and stop sending file content until it is resumed.
However, depending on the server, the connection can be closed if it isn't active for a certain period of time, and this can be just a couple of seconds.
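A sketch of such a worker loop, using a ManualResetEventSlim as the pause gate; sendChunk stands in for whatever actually writes to the request stream:

    using System;
    using System.IO;
    using System.Threading;

    class PausableUploader
    {
        // Set = running, Reset = paused.
        private readonly ManualResetEventSlim _gate = new ManualResetEventSlim(true);

        public void Pause()  { _gate.Reset(); }
        public void Resume() { _gate.Set(); }

        // Run this on your own worker thread.
        public void Upload(string path, Action<byte[], int> sendChunk)
        {
            var buffer = new byte[16 * 1024];
            using (var file = File.OpenRead(path))
            {
                int read;
                while ((read = file.Read(buffer, 0, buffer.Length)) > 0)
                {
                    _gate.Wait(); // blocks here while paused; beware server idle timeouts
                    sendChunk(buffer, read);
                }
            }
        }
    }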
We are using a WCF service layer to return images from a repository. Some of the images are color, multi-page, nearly all are TIFF format. We experience slowness - one of many issues.
1.) What experiences have you had with returning images via WCF?
2.) Do you have any suggestions or tips for returning large images?
3.) All messages are serialized via SOAP, correct?
4.) Does WCF do a poor job of compressing the large TIFF files?
Thanks all!
Okay, just to second the responses by ZombieSheep and Seba Gomez: you should definitely look at streaming your data. By doing so you could seamlessly integrate a GZipStream into the process. On the client side you can reverse the compression and convert the stream back to your desired image.
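The compression half of that might be sketched like this; the client reverses it with CompressionMode.Decompress:

    using System.IO;
    using System.IO.Compression;

    static class ImagePackaging
    {
        // Server side: compress the image bytes before returning them.
        public static byte[] Compress(byte[] imageBytes)
        {
            using (var output = new MemoryStream())
            {
                using (var gzip = new GZipStream(output, CompressionMode.Compress, leaveOpen: true))
                {
                    gzip.Write(imageBytes, 0, imageBytes.Length);
                } // disposing flushes the gzip footer before we snapshot the buffer
                return output.ToArray();
            }
        }

        // Client side: reverse the process to get the original image back.
        public static byte[] Decompress(byte[] compressed)
        {
            using (var gzip = new GZipStream(new MemoryStream(compressed), CompressionMode.Decompress))
            using (var output = new MemoryStream())
            {
                gzip.CopyTo(output);
                return output.ToArray();
            }
        }
    }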
By using streaming, there is only a select number of classes that can be used as parameters/return types, and you do need to modify your bindings throughout.
Here is the MSDN site on enabling streaming. This is the MSDN page that describes the restrictions on streaming contracts.
I assume you are also controlling the client side code, this might be really hard if you aren't. I have only used streaming when I had control of both the server and client.
Good luck.
If you are using another .NET assembly as your client, you can use two methodologies for returning large chunks of data: streaming or MTOM.
Streaming will allow you to pass a TIFF image as if it were a normal file stream on the local filesystem. See here for more details on the choices and their pros and cons.
Unfortunately, you're still going to have to transfer a large block of data, and I can't see any way around that, considering the points already raised.
I just wanted to add that it is pretty important to make sure your data is being streamed instead of buffered.
I read somewhere that even if you set transferMode to 'Streamed', if you aren't working with either a Stream itself, a Message, or an implementation of IXmlSerializable, the message is not streamed.
Make sure you keep that in mind.
What bindings are you using? WCF will have some overheads, but if you use basic-http with MTOM you lose most of the base-64 overhead. You'll still have the headers etc.
Another option would be to (wait for it...) not use WCF here - perhaps just a handler (ashx etc) that returns the binary.
Re compression - WCF itself won't have much of a hand in compression; the transport might, especially via IIS etc. with gzip enabled - however, images are notorious for being hard to compress.
In a previous project I worked on, we had a similar issue. We had a web service in C# that received requests for media. A media item could range from files to images and was stored in a database using BLOB columns. Initially, the web method that handled media retrieval requests read the whole content from the BLOB and returned it to the caller. This was one round trip to the server. The problem with this approach is that the client has no feedback about the progress of the operation.
There is no problem in computer science that cannot be solved by an extra level of indirection.
We started by refactoring the method into three methods.
Method1 sets up the conversation between the caller and the web service. This includes information about the request (like the media Id) and a capabilities exchange. The web service responds with a ticket Id, which the caller uses for future requests. This initial call is used for resource allocation.
Method2 is called repeatedly while there is more data to be retrieved for the media. The call includes information about the current offset and the ticket Id that was provided when Method1 was called. The return updates the current position.
Method3 is called to finish the request when Method2 reports that the reading of the requested media has completed. This frees the allocated resources.
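Expressed as a WCF-style contract, the three-method conversation could look roughly like this (names and types are illustrative):

    using System;
    using System.ServiceModel;

    [ServiceContract]
    public interface IMediaRetrieval
    {
        // Method1: negotiate the transfer and allocate server resources.
        [OperationContract]
        Guid BeginRetrieval(int mediaId); // returns the ticket Id

        // Method2: called repeatedly; returns the next chunk of the BLOB.
        [OperationContract]
        byte[] ReadChunk(Guid ticketId, long offset, int maxBytes);

        // Method3: free the allocated resources once reading has completed.
        [OperationContract]
        void EndRetrieval(Guid ticketId);
    }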
This approach is practical because you can give immediate feedback to the user about the progress of the operation. As a bonus, you can split the requests to Method2 across different threads. The progress can then be reported per chunk, as some BitTorrent clients do.
Depending on the size of the BLOB, you can choose to load it from the database in one go or to read it in chunks as well. This means you could use a balanced mechanism that, based on a given watermark (BLOB size), chooses to load it in one go or in chunks.
If there is still a performance issue, consider packaging the results using GZipStream, or read about message encoders, paying specific attention to the binary encoder and the Message Transmission Optimization Mechanism (MTOM).