I am working on an ASP.NET application (WebForms, ASP.NET 2.0, Framework 3.5). It is a 32-bit application running on IIS 7.0 on Windows Server 2008 R2 SP1.
I am facing an issue with large file uploads, meaning files of roughly 20 MB or more. The application is able to upload large files; however, after some number of uploads, the next set of uploads keeps failing until IIS is restarted.
The application supports concurrent file uploads. A single large file upload always works. Only when we start uploading more than one file at a time does one of the uploads get stuck.
I looked at the temp folder where posted file data gets written and noticed that, when the issue happens, the upload of the failing file never starts from the server's point of view: no temp file is ever created, and after a few seconds the request fails.
When things fail:
CPU usage is fine.
w3wp.exe sits at about 2 GB of memory usage (against 4 GB of total RAM).
w3wp.exe does not appear to crash, as the other pages of the application still work fine.
I tried using Wireshark to look at the network traffic, but it only shows the connection being reset (ERR_CONNECTION_RESET). Apart from that, I am not getting any clues.
I suspect the things below, but I am not sure how to confirm or fix them.
1) To handle concurrent uploads, the server needs to keep up with the rate at which the clients push data, and when it cannot match that rate it must be failing internally. This could be down to the server's inability to serve concurrent requests.
2) Frequent large uploads increase the memory footprint of the application to a point where it can no longer handle concurrent uploads, because even writing the files to the temporary location in chunks still requires RAM.
Here is my web.config setting:
<httpRuntime maxRequestLength="2097151" executionTimeout="10800" enableVersionHeader="false"/>
From the implementation perspective:
1) The client side is written in JavaScript; it creates a FormData object and sends the XHR to the server.
2) The server has a method that gets called once the complete file has been copied to the server's temp directory; we extract the file data from the Request.Files collection and then process it further.
When the issue happens, the server method gets called, but Request.Files comes back empty.
Please let me know if anyone has good insight into this that can guide me to the root cause and a fix.
UPDATE:
Client side code representation:
// Set HTTP headers
_http.setRequestHeader("x-uploadmethod", "formdata");
_http.setRequestHeader("x-filename", fileName); // fileName: name of the file being uploaded
// Prepare the form data
var data = new FormData();
data.append(fileName, fileToUpload); // fileToUpload: the File/Blob selected by the user
// Send the XHR request
_http.send(data);
Server side code representation:
HttpFileCollection files = Request.Files;
int Id = objUpload.UploadMyAssets(files[0]);
The logic in UploadMyAssets takes files[0] as an HttpPostedFile and then moves ahead with application-specific logic.
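For reference, a minimal defensive sketch of the server-side handling described above (the empty-collection check and the save path are illustrative additions, not part of the actual UploadMyAssets code; System.Web and System.IO are assumed):
HttpFileCollection files = Request.Files;
if (files.Count == 0 || files[0].ContentLength == 0)
{
    // This is the observed failure case: the request arrives but carries no file data.
    throw new InvalidOperationException("No file content was received with this request.");
}
HttpPostedFile posted = files[0];
// Path.GetFileName guards against path segments in the client-supplied name.
posted.SaveAs(Path.Combine(Server.MapPath("~/App_Data/uploads"), Path.GetFileName(posted.FileName)));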
Thanks
I had the same issue. It turns out the default ASP.NET session state manager blocks async upload streams over HTTPS (HTTP/2). It didn't happen over HTTP (non-SSL).
Resolved this by using SessionStateBehavior.ReadOnly for the controller class. Related to this post:
ASP.Net Asynchronous HTTP File Upload Handler
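For anyone hitting the same thing, a minimal sketch of how that setting looks on an MVC controller (the controller and action names are made up for illustration):
using System.Web.Mvc;
using System.Web.SessionState;

// Read-only session access stops concurrent requests from the same session
// from serializing on the session-state lock during the upload.
[SessionState(SessionStateBehavior.ReadOnly)]
public class UploadController : Controller
{
    [HttpPost]
    public ActionResult Upload()
    {
        HttpPostedFileBase file = Request.Files.Count > 0 ? Request.Files[0] : null;
        // ... application-specific processing of the posted file ...
        return new HttpStatusCodeResult(file != null ? 200 : 400);
    }
}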
Related
I am using Telerik Kendo File Upload for uploading a folder.
In the production environment, a few users are reporting an issue with folder upload: during the upload a few files error out, and the browser developer console logs an "ERR_HTTP2_PROTOCOL_ERROR" error (as attached) for the failed files.
When I try it myself I do not get this error and all folders upload properly. I asked a user to share the files that were failing, and when I tried them they uploaded successfully. When the user retried the same files that errored out yesterday, they succeeded today, but there are still other files that give the same error.
I went through a post saying the problem could be due to the use of HTTP/2, and that switching to HTTP/1.1 made it work fine. We are also using HTTP/2, but we do not have the option of going back to HTTP/1.1. Link below:
https://www.telerik.com/forums/problems-with-multi-file-upload-and-http-2
Any suggestions?
This is because HTTP/2 is not being negotiated for those clients, hence the error.
If you look at your local machine you will see that, under your server, you have the HTTPS protocol enabled and a valid certificate.
Your clients either lack a valid certificate on the server or are accessing the site over plain HTTP.
You can learn more here:
Http/2 explanation
SETTINGS_MAX_CONCURRENT_STREAMS (0x3):
Indicates the maximum number of concurrent streams that the sender will allow. This limit is directional: it applies to the number of streams that the sender permits the receiver to create. Initially, there is no limit to this value. It is recommended that this value be no smaller than 100, so as to not unnecessarily limit parallelism.
A value of 0 for SETTINGS_MAX_CONCURRENT_STREAMS SHOULD NOT be treated as special by endpoints. A zero value does prevent the creation of new streams; however, this can also happen for any limit that is exhausted with active streams. Servers SHOULD only set a zero value for short durations; if a server does not wish to accept requests, closing the connection is more appropriate.
Resolution: add a "Http2MaxConcurrentClientStreams" value under HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\services\HTTP\Parameters in the registry, set it to 100 or greater, and restart the server.
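A rough sketch of setting that value programmatically (the value name comes from the answer above; the DWORD type and the value 100 are assumptions, and administrative rights are required):
using Microsoft.Win32;

// Creates (or opens) the Parameters key and writes the DWORD; the server
// still has to be restarted for the change to take effect.
using (RegistryKey key = Registry.LocalMachine.CreateSubKey(
    @"SYSTEM\CurrentControlSet\services\HTTP\Parameters"))
{
    key.SetValue("Http2MaxConcurrentClientStreams", 100, RegistryValueKind.DWord);
}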
I make a request to a REST API (call it API 1), which internally calls two other APIs (APIs 2 and 3) synchronously.
API 1 = the REST API
API 2 = a pre-signed URL used to upload a file to S3
API 3 = a DB update (SQL Server)
API 3 (the DB update) is made only if the file is successfully uploaded to S3 (API 2).
If the DB update (API 3) fails, the changes made by API 2 should be rolled back, i.e. the uploaded file should be deleted from S3.
Please advise how to handle this scenario. Any out-of-the-box solution is welcome.
S3 is not transactional. In fact, REST APIs in general are not transactional; each individual operation is simply atomic:
What are atomic operations for newbies?
What this means is that you can't roll back an operation once it has succeeded.
It would be easy to say that it's fine: once the local DB update fails, I can issue a delete call to S3 to remove my file. But what happens if that fails too?
Another way would be to first write to your database and then upload the file. Again, if the file upload fails you can roll back the DB command. That is safer, but still... what happens when you send the request but get a timeout? The file might be on the server, but you just won't know.
Enter the world of eventual consistency.
While there are ways to mitigate the issue with retries (check out the Polly library for retry policies), what you can do is store the action.
You want to upload the file: add it to a queue and run the task. If the task fails, mark it as failed, retry as many times as you want, and record the failure reasons.
Then come manual interventions: when all else fails, someone should step in with a resolution strategy.
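A small sketch of the compensate-and-retry idea (the Polly calls are real, but UploadToS3Async, UpdateDatabaseAsync and DeleteFromS3Async are placeholders for your own API 2 / API 3 calls):
using System;
using Polly;

// Retry the compensating delete a few times with exponential backoff.
var retryDelete = Policy
    .Handle<Exception>()
    .WaitAndRetryAsync(3, attempt => TimeSpan.FromSeconds(Math.Pow(2, attempt)));

await UploadToS3Async(presignedUrl, fileStream);      // API 2: upload via the pre-signed URL
try
{
    await UpdateDatabaseAsync(fileKey);               // API 3: SQL Server update
}
catch
{
    // Compensating action: try to remove the uploaded object. If even this fails,
    // persist the key (queue/table) so it can be cleaned up later, manually if needed.
    await retryDelete.ExecuteAsync(() => DeleteFromS3Async(fileKey));
    throw;
}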
If you need to "undo" an upload to any file system (and S3 is, in effect, a file system), you do it like this (a concrete S3 sketch follows after these steps).
Upload the new file with some temporary unique file name (a guid will do fine).
To "commit" the upload, remove the file you're replacing and rename the one you just uploaded so it has the same name as the old one.
To "roll back" the upload, remove the temp file.
An even better way, if your application allows it, is to give each version of the file a different name. Then you just upload each new one with its own name, and clean up by deleting the old ones.
In your particular scenario, it might make sense to do your database update operation first, and the upload second, if that won't open you up to a nasty race condition.
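To make the temp-name idea concrete for S3 (where a "rename" is a copy plus a delete), here is a sketch using the AWS SDK for .NET; the bucket name, key names and localPath are placeholders:
using System;
using Amazon.S3;
using Amazon.S3.Model;

var s3 = new AmazonS3Client();
string tempKey = "uploads/tmp-" + Guid.NewGuid();

// 1) Upload under a temporary key (a pre-signed URL pointing at tempKey works the same way).
await s3.PutObjectAsync(new PutObjectRequest
{
    BucketName = "my-bucket",
    Key = tempKey,
    FilePath = localPath
});

// 2) Commit: copy the temp object to its final key, then remove the temp object.
await s3.CopyObjectAsync("my-bucket", tempKey, "my-bucket", "documents/final-name.zip");
await s3.DeleteObjectAsync("my-bucket", tempKey);

// Roll back instead (e.g. the DB update failed): just delete the temp object.
// await s3.DeleteObjectAsync("my-bucket", tempKey);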
The problem:
My company puts out a monthly newsletter which I host on our internal website. I have a page for the author of the newsletter to upload the latest version. Once the author has uploaded the latest newsletter, he sends a broadcast email to announce the new newsletter. Employees invariably check the new newsletter and send feedback to the author with corrections that need to be made.
Once the author has made the necessary corrections (typically within an hour of sending the broadcast email), he revisits my page and replaces the latest version with the updated newsletter.
Immediately following the replacement (or update, if you will) of the newsletter, anyone attempting to access it gets a 500 - Internal Server Error.
My IT guy, who maintains the server, cannot delete/rename/move the file because of a permissions error and has to do a lot of convoluted things to get the file deleted (and once the file is deleted, the author of the newsletter can re-upload the corrected copy and it works fine).
My IT guy and I are pretty sure the problem stems from the fact that I'm trying to replace the file while IIS is actively serving it to users (something I thought of and thought I had coded against).
The code that runs the replacement is as follows:
Protected Sub ReplaceLatestNewsletter()
    Dim dr As DataRow
    Dim sFile As String
    Dim mFileLock As Mutex
    Try
        If Me.Archives.Rows.Count > 0 Then
            dr = Me.Archives.Rows(0)
            sFile = dr("File").ToString
            If dr("Path").ToString.Length > 0 Then
                mFileLock = New Mutex(True, "MyMutexToPreventReadsOnOverwrite")
                Try
                    mFileLock.WaitOne()
                    System.IO.File.Delete(dr("Path").ToString)
                Catch ex As Exception
                    lblErrs.Text = ex.ToString
                Finally
                    mFileLock.ReleaseMutex()
                End Try
            End If
            fuNewsletter.PostedFile.SaveAs(Server.MapPath("~/Newsletter/archives/" & sFile))
        End If
    Catch ex As Exception
        lblErrs.Text = ex.ToString
    End Try
    dr = Nothing
    sFile = Nothing
    mFileLock = Nothing
End Sub
I thought the Mutex would take care of this (although after re-reading documentation I'm not sure I can actually use it like I'm trying to). Other comments on the code above:
Me.Archives is a DataTable stored in ViewState
dr("File").ToString is the filename (no path)
dr("Path").ToString is the full local machine path and filename (i.e., 'C:\App_Root\Newsletters\archives\20120214.pdf')
The filenames of the newsletters are set to "YYYYMMDD.pdf" where YYYYMMDD is the date (formatted) of the upload.
In any case, I'm pretty sure that the code above is not establishing an exclusive lock on the file so that the file can be overwritten safely.
Ultimately, I would like to make sure that the following happens:
If IIS is currently serving the file, wait until IIS has finished serving it.
Before IIS can serve the file again, establish an exclusive lock on the file so that no other process, thread, user (etc.) can read from or write to the file.
Either delete the file entirely and write a new file to replace it or overwrite the existing file with the new content.
Remove the exclusive lock so that users can access the file again.
Suggestions?
Also, can I use a Mutex to get a mutually exclusive lock on a file in the Windows filesystem?
Thank you in advance for your assistance and advice.
EDIT:
The way that the links for the newsletter are generated is based on the physical filename. The method used is:
Get all PDF files in the "archives" directory. For each file:
Parse the date of publication from the filename.
Store the date, the path to the file, the filename, and a URL to each file in a DataRow in a DataTable
Sort the DataTable by date (descending).
Output the first row as the current issue.
Output all subsequent rows as "archives" organized by year and month.
UPDATE:
Since I cannot discern when all existing requests for the file have completed, I took a closer look at the first part of @Justin's answer ("your mutex will only have an effect if the process that reads from the file also obtains the same mutex").
This led me to Configure IIS7 to serve static content through ASP.NET Runtime and the linked article in the accepted answer.
To that end, I have implemented a handler for all PDF files which uses New Mutex(True, "MyMutexToPreventReadsOnOverwrite") to ensure that only one thread is doing something with the PDF at any given time.
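A rough sketch of that kind of handler (C# for brevity; this is not the exact code used, and the class name is illustrative — the handler opens the named mutex without initial ownership so WaitOne/ReleaseMutex pair up cleanly):
using System.Threading;
using System.Web;

// Serves the PDFs itself so that reads go through the same named mutex the
// replacement code uses.
public class PdfHandler : IHttpHandler
{
    public bool IsReusable { get { return true; } }

    public void ProcessRequest(HttpContext context)
    {
        using (var fileLock = new Mutex(false, "MyMutexToPreventReadsOnOverwrite"))
        {
            fileLock.WaitOne();
            try
            {
                context.Response.ContentType = "application/pdf";
                context.Response.TransmitFile(context.Request.PhysicalPath);
            }
            finally
            {
                fileLock.ReleaseMutex();
            }
        }
    }
}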
Thank you for your answer, @Justin. While I did not wind up using the implementation you suggested, your answer pointed me towards an acceptable solution.
Your mutex will only have an effect if the process that reads from the file also obtains the same mutex. What is the method used to serve up the file? Is ASP.Net used or is this just a static file?
My workflow would be a little different:
Write the new newsletter to a new file
Have IIS start serving up the new file instead of the old one for the given Newsletter url
Delete the old file once all existing requests for that file have completed
This requires no locking and also means that we don't need to wait for requests for the current file to complete (something which could potentially take an indefinite amount of time if people keep making new requests). The only interesting bit is step 2, which will depend on how the file is served; the easiest way would probably be to either set up an HTTP redirect or use URL rewriting.
HTTP Redirect
An HTTP redirect is where the server tells the client to look in a different place when it receives a request for a given resource, so that the browser URL is automatically updated to match the new location. For example, if the user requested http://server/20120221.pdf then they could be automatically redirected to another URL such as http://server/20120221_v2.pdf (the URL shown in the browser would change, however the URL they need to type in would not).
You can do this in IIS 7 using the httpRedirect configuration element, for example:
<configuration>
  <system.webServer>
    <httpRedirect enabled="true" exactDestination="true" httpResponseStatus="Found">
      <!-- Note that I needed to add a * in for IIS to accept the wildcard even though it isn't used in this case -->
      <add wildcard="*20120221.pdf" destination="20120221_v2.pdf" />
    </httpRedirect>
  </system.webServer>
</configuration>
The linked page shows how to change these settings from ASP.NET.
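For illustration, changing the redirect from code typically goes through Microsoft.Web.Administration; a sketch, assuming the site is called "Default Web Site" and the process has administrative rights:
using Microsoft.Web.Administration;

using (var serverManager = new ServerManager())
{
    Configuration config = serverManager.GetWebConfiguration("Default Web Site");
    ConfigurationSection redirect = config.GetSection("system.webServer/httpRedirect");
    redirect["enabled"] = true;
    redirect["exactDestination"] = true;

    // Add the wildcard entry shown in the XML above.
    ConfigurationElementCollection entries = redirect.GetCollection();
    ConfigurationElement entry = entries.CreateElement("add");
    entry["wildcard"] = "*20120221.pdf";
    entry["destination"] = "20120221_v2.pdf";
    entries.Add(entry);

    serverManager.CommitChanges();
}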
Url Rewriting
Alternatively, IIS can be set up to serve the content of a different file for a given URL without the client (the browser) ever knowing the difference. This is called URL rewriting and can be done in IIS using something like this; however, it requires additional components to be installed in IIS.
Using an HTTP redirect is probably the easiest method.
I have a WCF service that returns a byte array containing a ZIP file (50 MB) to any client that requests it. If the ZIP is very small (say 1 MB), the SOAP response comes back from WCF with the byte array embedded in it, but the response is huge even for a 1 MB file. If I try to transfer the 50 MB file, the service hangs and throws an out-of-memory exception because the SOAP response becomes enormous.
What is the best option available with WCF / web services for transferring large files (mainly ZIP format), given that I am currently sending back a byte array? Is there a better approach than returning the file that way?
Is WCF / a web service the best way to transfer large files to any client, or is there a better option/technology available so that interoperability and scalability to 10,000 users can be achieved?
My code is below:
String pathfordownload = @"D:\New Folder.zip";
FileStream F2D = new FileStream(pathfordownload, FileMode.Open, FileAccess.Read);
BinaryReader binReader = new BinaryReader(F2D);
binReader.BaseStream.Position = 0;
byte[] binFile = binReader.ReadBytes(Convert.ToInt32(binReader.BaseStream.Length));
binReader.Close();
return binFile;
A working, real-world piece of information would be really helpful, as I have been struggling with all the material available via Google and have had no good results for the last week.
You can transfer a Stream through WCF, which lets you send files of (almost) unlimited length.
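A minimal sketch of what a streamed contract can look like (the names are illustrative; the binding also needs transferMode set to Streamed and a large maxReceivedMessageSize, which is binding configuration not shown here):
using System.IO;
using System.ServiceModel;

// Returning a Stream (with a streamed transfer mode on the binding) avoids
// buffering the whole ZIP in memory as a byte array.
[ServiceContract]
public interface IZipService
{
    [OperationContract]
    Stream GetZip(string name);
}

public class ZipService : IZipService
{
    public Stream GetZip(string name)
    {
        // WCF reads this stream and sends it to the client in chunks.
        return File.OpenRead(Path.Combine(@"D:\zips", Path.GetFileName(name) + ".zip"));
    }
}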
I've faced the exact same problem. The out-of-memory error is inevitable because you are using byte arrays.
What we did was flush the data to the hard drive, so instead of being limited by your virtual memory, your capacity for concurrent transactions is limited only by disk space.
Then, for the transfer, we just placed the file on the other computer. Of course, in our case it was a server-to-server file transfer. If you want to be decoupled from the peer, you can offer the file as an HTTP download.
So instead of responding with the file, your service could respond with an HTTP URL to the file's location. Then, when the client has successfully downloaded it from the server with a standard HttpRequest or WebClient, it calls a method to delete the file. In SOAP that could be Delete(string url); in REST that would be a DELETE on the resource.
I hope this makes sense to you. The most important part is to understand that in scalable software, especially if you are looking at 10,000 (concurrent?) clients, you may not rely on resources that are limited, such as memory streams or byte arrays. Rely instead on large and easily expandable resources, like a hard drive partition that could eventually sit on a SAN and that IT could grow as needed.
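To sketch the client side of that flow (GetDownloadUrl and DeleteFile are hypothetical service operations standing in for whatever your service exposes):
using System.Net;

// The service hands back a plain HTTP URL instead of the file bytes.
string url = service.GetDownloadUrl(fileId);

// WebClient streams the response to disk, so the 50 MB never sits in a byte array.
using (var client = new WebClient())
{
    client.DownloadFile(url, @"C:\temp\download.zip");
}

// Tell the server it can clean up the file now.
service.DeleteFile(url);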
I am uploading files using HttpWebRequest to an ASP.NET MVC application but, for some reason unknown to me, it fails to upload consistently.
I know the file is good, since if you try enough times it does eventually upload and can be viewed on the server just fine. When it fails, neither the server nor the client reports any error directly related to the upload; the upload just stops partway through, at a random point and time, and my MVC action method is called without the file being loaded (Request.Files.Count == 0).
This only seems to be a problem in our production environment over DSL. The test and development environments work fine, and the production environment works fine from the office (really fast connection to the servers), but it fails when running from home over DSL.
As you can see below, the point where it fails is pretty basic.
[Authorize]
[AcceptVerbs(HttpVerbs.Put | HttpVerbs.Post)]
[ValidateInput(false)]
public int UploadScene(int sceneID, int tourID, string name, int number, PhotoType photoType)
{
    SceneInfo scene;
    if (Request.Files.Count < 1) throw new InvalidOperationException("Image file not uploaded.");
    // process file...
}
It is probably configuration, but I can't figure out what it might be. We are running in a cluster (we have 4 web servers), so it might have something to do with that, but I am testing against a single server (I can isolate the machine by name and can verify that it is processing my requests). I have also made sure that it is running in its own app pool. What else should I check?
We are using IIS6 and .Net 3.5 on the servers.
Have you tried wrapping your form in the proper <form> tag?
<% using (Html.BeginForm("Action", "Controller", FormMethod.Post, new { @enctype = "multipart/form-data" })) { %>
I checked out the event viewer and noticed the app pool was recycling due to a virtual memory check. I turned that off and was able to upload over 20 images without a problem.
Of course, this doesn't explain why recycling causes the file upload to fail immediately. I was under the impression that the old pool would continue processing any existing requests until they complete or the shutdown time limit is reached (we have it set to 10 minutes in order to handle file uploads).