How can I implement an asynchronous call to the HttpResponse.Flush() method using .NET 4.0 and VS2013?
I tried a delegate:
var caller = new AsyncFlush(context.Response.Flush);
var result1 = caller.BeginInvoke(null, null);
caller.EndInvoke(result1);
then a task:
Task.Factory.StartNew(() => context.Response.Flush()).Start();
and finally a thread:
new Thread(new ThreadStart(() => context.Response.Flush())).Start();
But each case seems to freeze my Internet Explorer when flushing large files (1 GB+). Any ideas?
Regards.
Whether you flush the response or not does not matter. It also does not matter what chunk size you use when writing to the response object. Client and server communicate over TCP, which does not preserve or communicate chunk sizes in any way. The client is never impacted by the way the server wrote; it can't even tell the difference if it wanted to. It's an implementation detail of the server.
The reason your browser "freezes" is unknown, but it is not the way you flush data. Browsers have no trouble downloading arbitrarily sized files.
Note that all three of the code samples you posted are either slightly harmful and pointless, or do not work at all. You need to throw this away and look elsewhere for the cause of the freeze.
Your approach of creating an async wrapper is fine, but here are a few things you should know.
Response.Flush() forces the complete buffer to be sent to the client, so try to avoid sending 1 GB+ of data to the client at once. Processing that huge buffer can tie up the client and end in a hang.
Rather than sending the huge buffer in one go, send the stream in chunks and call Flush for each chunk, so that the client doesn't hang while processing your request.
See this KB article for writing a huge file to the response in chunks, using Response.Flush multiple times.
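The pattern from that KB article looks roughly like this (a minimal sketch; the `path` variable, the 8 KB chunk size, and the handler's `context` are illustrative assumptions, not from the question):

```csharp
// Sketch of chunked streaming inside an ASP.NET handler.
// "path" and the chunk size are illustrative.
const int ChunkSize = 8192;
var buffer = new byte[ChunkSize];

context.Response.BufferOutput = false;
context.Response.ContentType = "application/octet-stream";

using (var file = System.IO.File.OpenRead(path))
{
    int read;
    while ((read = file.Read(buffer, 0, buffer.Length)) > 0)
    {
        // Stop early if the client disconnected mid-download.
        if (!context.Response.IsClientConnected)
            break;

        context.Response.OutputStream.Write(buffer, 0, read);
        context.Response.Flush(); // push this chunk out instead of buffering 1 GB+
    }
}
```

With BufferOutput disabled and a small chunk size, neither side ever has to hold the whole file in memory at once.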
Related
Forgive me if this is covered somewhere, but my Google skills have failed me and there doesn't appear to be anything that covers this specific problem. I've come to use TcpClient for the first time (TcpClient and NetworkStream with StreamReader and StreamWriter), and there are a few intricacies I'm trying to understand.
Background
I'm communicating with a printer. I open up the network stream reader, which loops infinitely and parses incoming data. Outward commands are sent asynchronously from user inputs on the UI. All good so far.
A lot of examples show you sending data and then waiting on the response, which is fine in most circumstances. My issue is that the printer can randomly send me data that I have to respond to (out of ink / faults etc.), and I send commands to it asynchronously. I'm also paranoid that it may not always process commands in order.
I know the size of the expected responses and I can always split the commands out from the stream reliably (they always use start and end characters). The issue is that a lot of the responses have the same size and format.
My initial thought was to create a queue of outgoing commands, when a response comes back I can check it against the first in the queue to see if it matches the expected return format and the others if it doesn't.
If it doesn't match anything in the queue, treat it as a new response and try to figure out what it might have sent me.
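The queue-of-pending-commands idea can be sketched like this (the `PendingCommand` type and its `Matches` predicate are hypothetical; framing and the actual socket write are left out):

```csharp
using System;
using System.Collections.Generic;
using System.Threading.Tasks;

// Hypothetical pending-command record: a predicate to recognize its
// reply frame, and a completion source the sender can await.
class PendingCommand
{
    public Func<byte[], bool> Matches;
    public TaskCompletionSource<byte[]> Reply = new TaskCompletionSource<byte[]>();
}

class CommandCorrelator
{
    readonly object _gate = new object();
    readonly LinkedList<PendingCommand> _pending = new LinkedList<PendingCommand>();

    public Task<byte[]> Send(PendingCommand cmd /* plus the actual write */)
    {
        lock (_gate) _pending.AddLast(cmd);
        return cmd.Reply.Task;
    }

    // Called by the read loop for every complete frame (start/end chars stripped).
    // Returns true if the frame answered a pending command; false means it was
    // an unsolicited message (out of ink, fault, ...) to handle separately.
    public bool OnFrame(byte[] frame)
    {
        lock (_gate)
        {
            for (var node = _pending.First; node != null; node = node.Next)
            {
                if (node.Value.Matches(frame))
                {
                    _pending.Remove(node);
                    node.Value.Reply.SetResult(frame);
                    return true;
                }
            }
        }
        return false;
    }
}
```

Checking the front of the queue first (and only then the rest) preserves the "commands are usually answered in order" assumption while still tolerating out-of-order replies.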
Question
I guess simply put: are my assumptions correct? (I haven't experienced these problems yet, but I don't want to be surprised in production.) And are there any commonly accepted ways of dealing with this type of scenario?
Thanks
Problem
This has been driving me insane for days. We have a process here to import records from a CSV file to the database, through an admin page in an ASP.NET Web Forms (.NET 4.0) project. The process itself was too slow, and I was responsible for making it faster. I started by changing the core logic, which gave a good performance boost.
But if I upload large files (well, relatively large, about 3 MB tops), I have to wait until the upload finishes before I can start importing, and I don't return any progress to the client while I do so. The process itself is not that long, it takes about 5 to 10 seconds to complete, and yes, I've considered creating a separate Task and polling the server, but I thought it was overkill.
What have I done so far?
So, to fix this issue, I've decided to read the incoming stream and import the values while I'm reading. I created a generic handler (.ashx) and put the following code inside void ProcessRequest(HttpContext context):
using (var stream = context.Request.GetBufferlessInputStream())
{
}
First I remove the headers, then I read the stream (through a StreamReader) until I get a CRLF, convert the line to my model object, and keep reading the CSV. When I get 200 records or so, I bulk update all of them to the database, then keep accumulating records until I either have another 200 or reach the end of the file.
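The loop described above has roughly this shape (a sketch only; `MyModel`, `ParseLine`, and `BulkUpdate` are placeholders for the poster's own types and logic):

```csharp
// Sketch of batched import from the bufferless request stream.
// ParseLine and BulkUpdate stand in for the real conversion and DB code.
const int BatchSize = 200;
var batch = new List<MyModel>(BatchSize);

using (var stream = context.Request.GetBufferlessInputStream())
using (var reader = new StreamReader(stream))
{
    string line;
    while ((line = reader.ReadLine()) != null)   // ReadLine consumes up to CRLF
    {
        batch.Add(ParseLine(line));              // CSV line -> model object
        if (batch.Count == BatchSize)
        {
            BulkUpdate(batch);                   // one round-trip per ~200 records
            batch.Clear();
        }
    }
    if (batch.Count > 0)
        BulkUpdate(batch);                       // flush the final partial batch
}
```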
This seems to be working, but then, I've decided to stream my response as well. First, I disabled BufferOutput:
context.Response.BufferOutput = false;
Then, I added those headers to my response:
context.Response.AddHeader("Keep-Alive", "true");
context.Response.AddHeader("Cache-Control", "no-cache");
context.Response.ContentType = "application/X-MyUpdate";
Then, after sending those 200 records to the database, I write a response:
response.Write(s);
response.Flush();
s is a string with a fixed size of 256 chars. I know 256 chars doesn't always equate to 256 bytes, but I was just making sure I wouldn't write walls of text to the response and mess something up.
Here's its format:
| : pipe (record delimiter)
1 or 0 : success or failure
; : field delimiter
error message (if applicable)
| : pipe (next record delimiter, and so on)
Example:
|0;"Invalid price on line 123"|1;|1;|0;"Invalid id on line 127"|
On the client side, here's what I have (just the request part):
function import() {
    var formData = new FormData();
    var file = $('[data-id=file]')[0].files[0];
    formData.append('file', file);

    var xhr = new XMLHttpRequest();
    var url = "/Site/Update.ashx";
    xhr.onprogress = updateProgress;
    xhr.open('POST', url, true);
    xhr.setRequestHeader("Content-Type", "multipart/form-data");
    xhr.setRequestHeader("X-File-Name", file.name);
    xhr.setRequestHeader("X-File-Type", file.type);
    xhr.send(formData);
}

function updateProgress(evt) {
    debugger;
}
What happened :(
It doesn't send the data immediately to the client when I call response.Flush. I understand there is buffering on the client side, but it doesn't seem to be working at all, even when I send a lot of dummy data to try to bypass the issue.
After some time, when I write too much with Response.Write, the method becomes slower and slower until it hangs. Same with Response.Flush. I guess I'm missing something here.
I created a simple Web Forms project to test what I've been trying to do. It has a generic handler which returns a number each second for 10 seconds. It actually updates (though not always exactly once a second) and I can see the ongoing progress.
When I write just a few lines to the response, it does show progress, but ALWAYS when the whole process is almost finished. The main problem is when I get errors and try to write those to the response; they're longer than the success strings because they contain the error message.
I assume that if I write Response.Flush it's not 100% guaranteed to go to the client, correct? Or is the client itself the problem? If it is the client, why does the server hang when I call Response.Write too much?
EDIT: As an addendum, if I throw the same piece of code into an .aspx page, it works. So I believe it has something to do with the xhr (XMLHttpRequest) itself, which is not prepared to process streaming data, it seems.
I'll be glad to give more information if needed.
After another day bashing my head on this one, I think I finally got it. For those interested, I'm going to post the answer here.
First of all, I said my intention was to read the stream and process the CSV file at the same time, right?
using (var stream = context.Request.GetBufferlessInputStream())
{
}
The first problem I didn't account for was the fact that I did everything in a synchronous fashion, so I would only continue reading the stream as I processed the file. While that makes sense, it's not optimal, since I can read the file faster than I can analyze and update the data.
However, the real issue is that this hangs the upload process. I was trying to write with Response.Write before I had read the whole CSV file; long story short, I was trying to send a response before I had received the request completely.
I'm not sure what the expected behavior of Response.Write is when it executes before the whole request has been read, but something tells me it's impossible for it to send information to the client at the same time the client is sending information to the server, unless I had some kind of full-duplex connection. I saw the question "HTTP pipelining - concurrent responses per connection" a few hours ago, and although it doesn't answer my question, the picture made me curious whether the response could happen together with the request.
Then I found this link randomly, apparently from the working group charged with maintaining and developing the "core" specifications for HTTP: Can the response entity be transmitted before all the request entity has been read?, which basically asks:
Can the response entity be transmitted before all the request entity has been read?
I have been implementing an HTTP/1.1 server which consumes the request
entity lazily, as need by the application.
If the application decides that it can generate some or all of the
response before it finishes reading the whole request entity, is that
allowed?
What is the answer if the status is not an error code. Can the server
begin transmitting the response entity before all of the request
entity has been read? (Assume that the server is intelligent enough
to avoid deadlock by always reading request data when it arrives).
The reply is a short Yes, but the question is developed as the email replies go on. One thing that caught my attention was specifically this part: server is intelligent enough to avoid deadlock by always reading request data when it arrives. I kept reading:
In short, the result of the response entity transmitted before all the request
entity has been read is unpredictable.
By the way, pure tunnelling leads to deadlock: the application can get
stuck writing if the client isn't reading the response until it
transmits all the request, and all the TCP windows fill up
It's not possible to omit the buffering somewhere: for clients which
send a whole request before reading the response, the entire request
has to be buffered or stored somewhere, either in the server or in
the application, to resolve the deadlock.
Response.Write was getting stuck, it seems.
Though I know forum talk is no official paper (even their own forum talk), I guess this gave me the insight needed to solve my problem.
I also tried to dig in .NET code to double check if that was the root of the problem, but then I ran into native calls and gave up.
So afterwards I changed my code to upload first, then import the data while spitting out the results, and put in two progress bars: one for uploading and another for processing.
It ran smoothly and worked as expected. No more hangs or slow calls to Response.Write.
If I wanted, I could import the data while uploading the file, but if and only if I started writing the response after I got all request data.
Mystery solved. Thanks to everyone who read the question. I won't accept my own answer yet; I'll wait 2 or 3 days to see whether anyone has a better explanation for this incident.
I'm developing a small online game in C#. Currently I am using simple sync TCP sockets. But now (because this is some kind of "learning project") I want to convert to asynchronous sockets. In the client I have the method: byte[] SendAndReceive(Opcode op, byte[] data).
But when I use async sockets this isn't possible anymore.
For example my MapManager class first checks if a map is locally in a folder (checksum) and if it isn't, the map will be downloaded from the server.
So my question:
Is there any good way to send some data and get the answer without saving the received data to some kind of buffer and polling till this buffer isn't null?
Check out IO Completion Ports and the SocketAsyncEventArgs that goes with it. It raises events when data has been transferred, but you still need a buffer. Just no polling. It's fast and pretty efficient.
http://www.codeproject.com/Articles/83102/C-SocketAsyncEventArgs-High-Performance-Socket-Cod
and another example on MSDN
http://msdn.microsoft.com/en-us/library/system.net.sockets.socketasynceventargs.aspx
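To give a feel for the event-driven pattern, the receive side can be sketched as follows (the wrapper class, the 4 KB buffer, and the assumption of an already-connected socket are all illustrative, not from either article):

```csharp
using System.Net.Sockets;

// Sketch of a no-polling receive loop built on SocketAsyncEventArgs.
class Receiver
{
    readonly Socket _socket;
    readonly SocketAsyncEventArgs _args = new SocketAsyncEventArgs();

    public Receiver(Socket socket)
    {
        _socket = socket;
        _args.SetBuffer(new byte[4096], 0, 4096);
        _args.Completed += OnCompleted;
    }

    public void Start()
    {
        // ReceiveAsync returns false when the operation completed synchronously,
        // in which case the Completed event will not fire - handle it inline.
        if (!_socket.ReceiveAsync(_args))
            OnCompleted(_socket, _args);
    }

    void OnCompleted(object sender, SocketAsyncEventArgs e)
    {
        if (e.SocketError == SocketError.Success && e.BytesTransferred > 0)
        {
            // Consume e.Buffer[e.Offset .. e.Offset + e.BytesTransferred) here,
            // then post the next receive - no polling involved.
            Start();
        }
    }
}
```

Your code still owns a buffer, but you are notified when data lands in it rather than checking it in a loop.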
A code example of what you have would help, but I'd suggest using a new thread for each socket connection with a thread manager. Let me know if that makes sense or if it's applicable here. :)
I've googled far and wide and found no answer to this. I am programming my own little Tcp library to make it easy for myself. On the server I have a 'ConnectedClient' object that has a socket and a network stream. On the server static class I have a Send function that sends a length-prefixed stream. I want the stream to be thread safe, but for each client. Would this work for that?
Send(ConnectedClient client, ...(rest of parameters not relevant))
{
    lock (client.lockObject)
    {
        // Writing to stream thread-safely I hope...
    }
}
I hope I made myself clear enough, if not, just ask for more details.
It looks like you are writing some kind of multiplexer. Indeed, that should work fine as long as you write an entire payload (and length-prefix) within a single lock, and as long as the lockObject is representative of the mutual-exclusive resource (i.e. must be a common lockObject for all clients that we don't want to collide).
Perhaps the trickier question is: are you going to read the reply within that method (success/return-value/critical-fail), or are you going to read the reply asynchronously, and let the next writer write to the stream while the first message is flying...
For comparison, when writing BookSleeve (a Redis multiplexer; full source available if you want some reference code), I chose a different strategy: one dedicated thread does all the writing to the stream, with all the callers simply appending to a thread-safe queue; that way, even if there is a backlog of work, the callers aren't delayed.
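That dedicated-writer strategy can be sketched with a blocking queue (the `Message` type and its `WriteTo` method are hypothetical stand-ins; BookSleeve's real implementation differs):

```csharp
using System.Collections.Concurrent;
using System.IO;
using System.Threading;

// Sketch: a single writer thread drains a thread-safe queue, so the
// stream itself never needs a lock and callers never block on I/O.
class StreamMultiplexer
{
    readonly BlockingCollection<Message> _outbox = new BlockingCollection<Message>();
    readonly Thread _writer;

    public StreamMultiplexer(Stream stream)
    {
        _writer = new Thread(() =>
        {
            // GetConsumingEnumerable blocks until items arrive and exits
            // cleanly once CompleteAdding() has been called.
            foreach (var msg in _outbox.GetConsumingEnumerable())
            {
                msg.WriteTo(stream);   // only this thread ever touches the stream
                stream.Flush();
            }
        }) { IsBackground = true };
        _writer.Start();
    }

    // Callers just append to the queue and return immediately.
    public void Enqueue(Message msg) { _outbox.Add(msg); }

    public void Shutdown() { _outbox.CompleteAdding(); _writer.Join(); }
}
```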
I'm looking for a way to pause or resume an upload process via C#'s WebClient.
pseudocode:
WebClient Client = new WebClient();
Client.UploadFileAsync(new Uri("http://mysite.com/receiver.php"), "POST", @"C:\MyFile.jpg");
Maybe something like..
Client.Pause();
any idea?
WebClient doesn't have this kind of functionality - even the slightly-lower-level HttpWebRequest doesn't, as far as I'm aware. You'll need to use an HTTP library which gives you more control over exactly when things happen (which will no doubt involve more code as well, of course). The point of WebClient is to provide a very simple API to use in very simple situations.
As stated by Jon Skeet, this is not available in either the WebClient or the HttpWebRequest classes.
However, if you have control of the server that receives the upload, perhaps you could upload small chunks of the file using WebClient and have the server assemble the chunks when all have been received. Then it would be somewhat easier for you to build pause/resume functionality.
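A sketch of that chunked approach (the query-string parameters and the pause mechanism are invented for illustration; the server side must be written to reassemble chunks keyed on them):

```csharp
using System;
using System.Net;
using System.Threading;

// Sketch: pausable upload by posting one chunk at a time.
// The "id" and "index" parameters are assumptions, not a real API.
static void UploadInChunks(string url, byte[] file, Guid id, Func<bool> isPaused)
{
    const int ChunkSize = 64 * 1024;
    using (var client = new WebClient())
    {
        for (int offset = 0, index = 0; offset < file.Length; offset += ChunkSize, index++)
        {
            while (isPaused())
                Thread.Sleep(200);        // "pausing" = simply not sending the next chunk

            int size = Math.Min(ChunkSize, file.Length - offset);
            var chunk = new byte[size];
            Buffer.BlockCopy(file, offset, chunk, 0, size);
            client.UploadData(url + "?id=" + id + "&index=" + index, "POST", chunk);
        }
    }
}
```

Because each chunk is an independent request, pausing between chunks can't hit a server-side connection timeout the way pausing mid-request would.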
If you do not have control of the server, you will need to use an API that gives you more control, and subsequently gives you more stuff to worry about. And even then, the server might time you out if you pause for too long.
OK, without giving you code examples, I will tell you what you can do.
Write a WCF service for your upload; that service needs to use streaming.
Things to remember:
- Client and server need to identify the file somehow; I suggest the use of a Guid so the server knows which file to append the extra data to.
- The client needs to keep track of its position in the array so it knows where to begin the streaming after it resumes. (You can even get the server to tell the client how much data it has, but make sure the client knows too.)
- The server needs to keep track of how much data it has already received and how much is still missing. Files should have a lifetime on the server; you don't want half-uploaded and forgotten files stored on the server forever.
- Remember that streaming does not allow authentication, since the whole call is just one HTTP request. You can use SSL, but remember that it will add overhead.
- You will need to create the service contract at the message level; a standard method won't do.
I'm currently writing a blog post about this very subject; it will be posted this week with code samples showing how to get it working.
You can check it on my blog.
I know this answer does not contain code samples, but the blog will have some; all in all, this is one way of doing stop-and-resume file uploads to a server.
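The message-level contract mentioned above looks roughly like this (a sketch; the names `FileChunk`, `IUploadService`, and `GetBytesReceived` are illustrative, not from the upcoming blog post):

```csharp
using System;
using System.IO;
using System.ServiceModel;

// Sketch of a message contract for streamed upload: a streamed WCF message
// may contain exactly one Stream body member, with metadata in headers.
[MessageContract]
public class FileChunk
{
    [MessageHeader] public Guid FileId;          // identifies the file to append to
    [MessageHeader] public long Offset;          // where this chunk starts
    [MessageBodyMember] public Stream Data;      // the streamed chunk itself
}

[ServiceContract]
public interface IUploadService
{
    [OperationContract]
    void AppendChunk(FileChunk chunk);

    // Lets a resuming client ask how much the server already has.
    [OperationContract]
    long GetBytesReceived(Guid fileId);
}
```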
To do something like this you must write your own worker thread that performs the actual HTTP POST stepwise.
Before sending each piece of the file content, check whether the operation is paused, and stop sending until it is resumed.
However, depending on the server, the connection may be closed if it isn't active for a certain period of time, and that can be just a couple of seconds.
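A common way to implement that pause check is a gate the worker thread waits on (a sketch; the buffer size and the request-stream plumbing are assumptions):

```csharp
using System.IO;
using System.Threading;

// Sketch of a pausable upload worker using a manual-reset gate.
class PausableUpload
{
    readonly ManualResetEventSlim _gate = new ManualResetEventSlim(true); // starts unpaused

    public void Pause()  { _gate.Reset(); }
    public void Resume() { _gate.Set(); }

    // requestStream stands in for the open HTTP request body stream.
    public void Run(Stream requestStream, Stream file)
    {
        var buffer = new byte[8192];
        int read;
        while ((read = file.Read(buffer, 0, buffer.Length)) > 0)
        {
            _gate.Wait();                        // blocks here while paused
            requestStream.Write(buffer, 0, read);
        }
        // Caveat from the answer above: the server may close the
        // connection if the pause lasts more than a few seconds.
    }
}
```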