I have an application where the user can upload XML files. Everything under 25 MB is no problem, but when I try to upload my test file (117 MB), I get an error when the application is hosted.
Since I prepared the application as described in 1000 other posts, it works locally up to 2 GB and also when hosted. But after the upload I get "HTTP Error 503.0 - Service Unavailable".
When I log in again, the file is there, but the error is still inconvenient.
web.config:
<security>
  <requestFiltering>
    <requestLimits maxAllowedContentLength="1073741824" />
  </requestFiltering>
</security>
upload function:
[HttpPost]
[RequestFormLimits(MultipartBodyLengthLimit = 1073741824)]
public IActionResult Upload(IFormFile file)
{
    if (file == null) return RedirectToAction("OnixIndex");

    string completePath = app_resources_path + file.FileName;
    using (FileStream fs = System.IO.File.Create(completePath))
    {
        file.CopyTo(fs);
        fs.Flush();
    }
    return RedirectToAction("OnixIndex");
}
Startup.cs:
services.Configure<FormOptions>(options =>
{
    options.ValueLengthLimit = int.MaxValue;
    options.MultipartBodyLengthLimit = int.MaxValue;
    options.MultipartHeadersLengthLimit = int.MaxValue;
});
Should I use a different way to upload the file? I tried a stream, but without improvement. Is there a different technology or NuGet library I should use?
Could you please list your system specs (RAM, storage type, CPU cores, .NET version)? This error is mostly related to the system not being able to handle the 1 GB file, and you may need to scale your system as mentioned in the following thread:
503 Server Unavailable after user uploads many files. Is this an httpRuntime setting?
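Apart from that, when the 503 only shows up after long uploads behind IIS, the ASP.NET Core Module's request timeout in web.config is also worth a look. A minimal sketch (requestTimeout only applies to out-of-process hosting, where the default is two minutes; processPath and arguments are placeholders for your app):
<system.webServer>
  <handlers>
    <add name="aspNetCore" path="*" verb="*" modules="AspNetCoreModuleV2" resourceType="Unspecified" />
  </handlers>
  <!-- placeholders: point these at your own app -->
  <aspNetCore processPath="dotnet" arguments=".\MyApp.dll" requestTimeout="00:20:00" hostingModel="outofprocess" />
</system.webServer>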
I have a C# ASP.NET Web API app on .NET Framework running perfectly fine on my local machine. However, when I deploy it to an Azure App Service and attempt to access one of the endpoints, I get an error.
What I am not understanding is where the folder \api\print in the error is coming from. My endpoint is doing two things:
Create a PDF file. This is working fine, since I can see the created file in the wwwroot\app_data folder.
Upload the created PDF file to Google Drive. This is where the code is failing. I suspect it is because the app cannot open the file.
Below is the code for the actions above:
public GoogleDrivePDFFile CreateTicket()
{
    // directory for created files, as per Stack Overflow question https://stackoverflow.com/questions/1268738
    string tickets_path = HostingEnvironment.MapPath("~/App_Data/");

    // current date and time
    var current_date_time = DateTime.UtcNow;

    // the filename to use for the newly created file
    string google_file_name = string.Concat(
        current_date_time.Year,
        current_date_time.Month,
        current_date_time.Day,
        current_date_time.Hour,
        current_date_time.Minute,
        current_date_time.Second,
        "_", "XXXX",
        ".pdf"
    );

    // full path for the new file in the filesystem
    string filName = Path.Combine(tickets_path, google_file_name);

    // the file is created fine in the filesystem; I can see it in the Azure app file system
    PdfFormatProvider provider = new PdfFormatProvider();
    using (Stream output = File.Open(filName, FileMode.CreateNew))
    {
        provider.Export(document, output);
    }

    GoogleUploads googleUploads = new GoogleUploads();

    // It is failing here...
    var returned_file = googleUploads.UploadTicket(google_file_name, filName, requiredDocument);

    /*
     * I have tested the endpoint with the below and this works fine.
     *
    var working_example = new GoogleDrivePDFFile();
    working_example.DocumentId = "Document ID .... okay";
    working_example.DownloadLink = "Download Link .... okay";
    working_example.WebViewLink = "Web View Link .... okay";
    return working_example;
    */

    return returned_file;
}
I am not sure what I am doing wrong.
In my case, my Google security keys file was not uploaded to the Azure Web App file system. That is why my code was not working. I uploaded the keys file and my app worked.
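If anyone hits the same thing, a quick existence check before building the credential makes the failure obvious. A minimal sketch, assuming a hypothetical service-account key file google-keys.json under App_Data and the Google.Apis.Auth / Google.Apis.Drive.v3 packages:
// The key file name and location are assumptions for illustration.
string keysPath = HostingEnvironment.MapPath("~/App_Data/google-keys.json");
if (!File.Exists(keysPath))
    throw new FileNotFoundException("Google service account key file is missing on the server.", keysPath);

// Build the Drive credential from the key file once we know it is actually deployed.
GoogleCredential credential = GoogleCredential
    .FromFile(keysPath)
    .CreateScoped(DriveService.Scope.DriveFile);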
Problem
I'm trying to create an ASP.NET Core (3.1) web application that accepts file uploads and then breaks them into chunks to send to SharePoint via the MS Graph API. There are a few other posts here that address similar questions, but they assume a certain level of .NET knowledge that I don't have just yet. So I'm hoping someone can help me cobble something together.
Configure Web server & app to Accept Large Files
I have done the following to allow IIS Express to upload up to 2GB files:
a) created a web.config file with the following code:
<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <location path="Home/UploadFile">
    <system.webServer>
      <handlers>
        <add name="aspNetCore" path="*" verb="*" modules="AspNetCoreModuleV2" resourceType="Unspecified" />
      </handlers>
      <security>
        <requestFiltering>
          <!--unit is bytes => 2GB-->
          <requestLimits maxAllowedContentLength="2147483647" />
        </requestFiltering>
      </security>
    </system.webServer>
  </location>
</configuration>
b) I have the following in my Startup.cs ConfigureServices method:
//Add support for uploading large files TODO: DO I NEED THIS?????
services.Configure<FormOptions>(x =>
{
    x.ValueLengthLimit = int.MaxValue;            // Limit on individual form values
    x.MultipartBodyLengthLimit = int.MaxValue;    // Limit on form body size
    x.MultipartHeadersLengthLimit = int.MaxValue; // Limit on form header size
});
services.Configure<IISServerOptions>(options =>
{
    options.MaxRequestBodySize = int.MaxValue; //2GB
});
Here's what my form looks like that allows the user to pick the file and submit:
@{
    ViewData["Title"] = "Messages";
}
<h1>@ViewData["Title"]</h1>
<p></p>
<form id="uploadForm" action="UploadFile" method="post" enctype="multipart/form-data">
<dl>
<dt>
<label for="file">File</label>
</dt>
<dd>
<input id="file" type="file" name="file" />
</dd>
</dl>
<input class="btn" type="submit" value="Upload" />
<div style="margin-top:15px">
<output form="uploadForm" name="result"></output>
</div>
</form>
Here's what the controller looks like:
[HttpPost]
[RequestSizeLimit(2147483647)] // unit is bytes => 2GB
[RequestFormLimits(MultipartBodyLengthLimit = 2147483647)]
public async void UploadFile()
{
    User currentUser = null;
    currentUser = await _graphServiceClient.Me.Request().GetAsync();

    // nothing that handles the file has been written yet
}
When the user clicks on the file button and chooses a large file, I no longer get IIS 413 error messages. Great. The logic hits the right method in my controller.
But I have the following questions for this part of the code:
When the user picks the file ... what is actually happening under the hood? Has the file actually been stuffed into my form and is accessible from my controller?
Is it a stream?
How do I get to the file?
If, ultimately, I need to send this file to SharePoint using this type of approach (the last example on chunking), it seems the best approach is to save the file somewhere on my server and then adapt the sample code to chunk it out? The sample code refers to file paths and file sizes, so I'm assuming I need to persist the file to my web server first and take it from there.
If I do need to save it, can you point me in the right direction - maybe some sample code that shows me how to take the POSTed data in my form and save it?
Ultimately, this will need to be refactored so that there is no GUI... it's just an API that accepts large files to upload somewhere. But I think I'll try to learn how to do it this way first and then refactor my code to be API only.
Sorry for the noob questions. I have tried to do my research before posting here. But some things are still a bit fuzzy.
EDIT 1
Per the suggestion in one of the posted answers, I've downloaded sample code that demonstrates how to bypass saving to a local file on the web server. It's based on this article.
I have created a web.config file again to avoid the 413 errors from IIS. I have also edited the list of allowed file extensions to support .pdf, .docx and .mp4.
When I try to run the sample project and choose "Stream a file with AJAX to a controller endpoint" under the "Physical Storage Upload Examples" section, it dies here:
// This check assumes that there's a file
// present without form data. If form data
// is present, this method immediately fails
// and returns the model error.
if (!MultipartRequestHelper
    .HasFileContentDisposition(contentDisposition))
{
    ModelState.AddModelError("File",
        $"The request couldn't be processed (Error 2).");
    // Log error
    return BadRequest(ModelState);
}
As is mentioned in the comments above the code, it's checking for form data, and when it finds it... it dies. So I've been playing around with the HTML page, which looked like this:
<form id="uploadForm" action="Streaming/UploadPhysical" method="post"
enctype="multipart/form-data" onsubmit="AJAXSubmit(this);return false;">
<dl>
<dt>
<label for="file">File</label>
</dt>
<dd>
<input id="file" type="file" name="file" />asdfasdf
</dd>
</dl>
<input class="btn" type="submit" value="Upload" />
<div style="margin-top:15px">
<output form="uploadForm" name="result"></output>
</div>
</form>
And I've tried to remove the form like this:
<dl>
  <dt>
    <label for="file">File</label>
  </dt>
  <dd>
    <input id="file" type="file" name="file" />
  </dd>
</dl>
<input class="btn" type="button" asp-controller="Streaming" asp-action="UploadPhysical" value="Upload" />
<div style="margin-top:15px">
  <output form="uploadForm" name="result"></output>
</div>
But the button doesn't do anything now when I click it.
Also, in case you're wondering or it helps: I manually copied a file into the c:\files folder on my computer, and when the sample app opens, it does list the file - proving it can read the folder.
I added read/write permissions, so hopefully the web app can write to it when I get that far.
I've implemented a similar large-file controller, but using MongoDB GridFS.
In any case, streaming is the way to go for large files because it is fast and lightweight.
And yes, the best option is to save the files on your server storage before you send them.
One suggestion: add some validation to allow only specific extensions, and restrict execution permissions.
Back to your questions:
The entire file is read into an IFormFile, which is a C# representation of the file used to process or save the file.
The resources (disk, memory) used by file uploads depend on the number and size of concurrent file uploads. If an app attempts to buffer too many uploads, the site crashes when it runs out of memory or disk space. If the size or frequency of file uploads is exhausting app resources, use streaming.
source 1
The CopyToAsync method enables you to perform resource-intensive I/O operations without blocking the main thread.
source 2
Here you have examples.
Example 1:
using System.IO;
using Microsoft.AspNetCore.Http;
//...

[HttpPost]
[Authorize]
[DisableRequestSizeLimit]
[RequestFormLimits(ValueLengthLimit = int.MaxValue, MultipartBodyLengthLimit = int.MaxValue)]
[Route("upload")]
public async Task<ActionResult> UploadFileAsync(IFormFile file)
{
    if (file == null)
        return Ok(new { success = false, message = "You have to attach a file" });

    var fileName = file.FileName;
    // var extension = Path.GetExtension(fileName);
    // Add validations here...

    var localPath = $"{Path.Combine(System.AppContext.BaseDirectory, "myCustomDir")}\\{fileName}";

    // Create dir if not exists
    Directory.CreateDirectory(Path.Combine(System.AppContext.BaseDirectory, "myCustomDir"));

    using (var stream = new FileStream(localPath, FileMode.Create))
    {
        await file.CopyToAsync(stream);
    }

    // db.SomeContext.Add(someData);
    // await db.SaveChangesAsync();

    return Ok(new { success = true, message = "All set", fileName });
}
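Picking up the validation suggestion above, here is a minimal sketch of what could go where the "// Add validations here..." comment sits in Example 1 (the allowed list is just an example, and Contains needs using System.Linq):
// Reject anything that is not in a short whitelist of extensions.
var allowedExtensions = new[] { ".pdf", ".docx", ".xml" };
var extension = Path.GetExtension(file.FileName).ToLowerInvariant();
if (!allowedExtensions.Contains(extension))
    return Ok(new { success = false, message = "File type not allowed" });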
Example 2 with GridFS:
[HttpPost]
[Authorize]
[DisableRequestSizeLimit]
[RequestFormLimits(ValueLengthLimit = int.MaxValue, MultipartBodyLengthLimit = int.MaxValue)]
[Route("upload")]
public async Task<ActionResult> UploadFileAsync(IFormFile file)
{
    if (file == null)
        return Ok(new { success = false, message = "You have to attach a file" });

    var options = new GridFSUploadOptions
    {
        Metadata = new BsonDocument("contentType", file.ContentType)
    };

    using (var reader = new StreamReader(file.OpenReadStream()))
    {
        var stream = reader.BaseStream;
        await mongo.GridFs.UploadFromStreamAsync(file.FileName, stream, options);
    }

    return Ok(new { success = true, message = "All set" });
}
You are on the right path, but as others have pointed out, Microsoft has put up a well-written document on file uploading which is a must-read in your situation - https://learn.microsoft.com/en-us/aspnet/core/mvc/models/file-uploads?view=aspnetcore-6.0#upload-large-files-with-streaming.
As for your questions
do you need services.Configure<FormOptions>(x =>
No you don't! And you don't need services.Configure<IISServerOptions>(options => either; it's read from the maxAllowedContentLength that you have configured in your web.config.
When the user picks the file ... what is actually happening under the hood? Has the file actually been stuffed into my form and is accessible from my controller? Is it a stream?
If you disable form value model binding and use the MultipartReader, the file is streamed and won't be cached in memory or on disk; as you drain the stream, more data is accepted from the client (the browser).
How do I get to the file?
Check the document above; there is a working sample for accessing the stream.
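To make that concrete, here is a rough sketch of what the streaming endpoint in the docs sample boils down to. The [DisableFormValueModelBinding] filter and the c:\files target folder come from that sample, so treat the details as assumptions and compare against the linked article:
using Microsoft.AspNetCore.Http;          // IsFileDisposition()
using Microsoft.AspNetCore.Mvc;
using Microsoft.AspNetCore.WebUtilities;  // MultipartReader
using Microsoft.Net.Http.Headers;         // MediaTypeHeaderValue, ContentDispositionHeaderValue, HeaderUtilities
//...

[HttpPost]
[DisableFormValueModelBinding]
[RequestSizeLimit(2147483647)]
public async Task<IActionResult> UploadPhysical()
{
    // Pull the multipart boundary out of the Content-Type header.
    var boundary = HeaderUtilities.RemoveQuotes(
        MediaTypeHeaderValue.Parse(Request.ContentType).Boundary).Value;
    var reader = new MultipartReader(boundary, Request.Body);

    // Walk the multipart sections; file sections are streamed straight to disk,
    // so the whole upload is never buffered in memory.
    var section = await reader.ReadNextSectionAsync();
    while (section != null)
    {
        if (ContentDispositionHeaderValue.TryParse(
                section.ContentDisposition, out var contentDisposition)
            && contentDisposition.IsFileDisposition())
        {
            var targetPath = Path.Combine(@"c:\files", Path.GetRandomFileName());
            using (var targetStream = System.IO.File.Create(targetPath))
            {
                await section.Body.CopyToAsync(targetStream);
            }
        }
        section = await reader.ReadNextSectionAsync();
    }

    return Ok();
}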
If, ultimately, I need to send this file to SharePoint using this type of approach (the last example on chunking), it seems the best approach is to save the file somewhere on my server and then adapt the sample code to chunk it out? The sample code refers to file paths and file sizes, so I'm assuming I need to persist the file to my web server first and take it from there.
Not necessarily; using the streaming approach you can copy the stream data directly.
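If you do end up pushing the data to SharePoint via Graph, the chunking itself is handled for you by the Graph SDK's LargeFileUploadTask helper. A rough sketch only; the drive path, file name and slice size are placeholders, and the exact request builders depend on your SDK version, so check the chunked-upload docs before relying on this:
// Assumes an authenticated GraphServiceClient (_graphServiceClient) and a seekable
// Stream (sourceStream), e.g. a FileStream over the saved upload - the upload task
// needs the stream length to compute chunk ranges. "uploads/{fileName}" is a placeholder.
var uploadSession = await _graphServiceClient.Me.Drive.Root
    .ItemWithPath($"uploads/{fileName}")
    .CreateUploadSession()
    .Request()
    .PostAsync();

int maxSliceSize = 320 * 1024 * 10; // slice sizes must be multiples of 320 KiB
var uploadTask = new LargeFileUploadTask<DriveItem>(uploadSession, sourceStream, maxSliceSize);

var result = await uploadTask.UploadAsync();
if (result.UploadSucceeded)
{
    DriveItem uploaded = result.ItemResponse; // id, webUrl, download links, etc.
}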
I have a task to allow uploads of more than 2 GB per file from an internal application built with .NET 5 Razor Pages. I have changed all the settings in web.config and on the server to allow these uploads, but I am still greeted with a 400 error when trying.
<system.web>
  <httpRuntime executionTimeout="240" maxRequestLength="20480" />
</system.web>
<requestFiltering>
  <requestLimits maxAllowedContentLength="3147001541" />
</requestFiltering>
I am using the following to upload the files:
var path = Path.Combine(targetFileName, UploadedFile.FileName);
using (var stream = new FileStream(path, FileMode.Create))
{
    await UploadedFile.CopyToAsync(stream);
}
After that, it just saves the location of the copied file in the DB.
system.web settings are only used by ASP.NET (not ASP.NET Core). In ASP.NET Core you change the limit for your action by adding the RequestSizeLimitAttribute or DisableRequestSizeLimitAttribute. Additionally, you will likely need to increase the form data limit by adding the RequestFormLimitsAttribute.
MVC:
[RequestSizeLimit(3147001541)]
[RequestFormLimits(MultipartBodyLengthLimit = 3147001541)]
public async Task<IActionResult> Upload(IFormFile file)
{
    // ...
}
Razor Pages:
@attribute [RequestSizeLimit(3147001541)]
@attribute [RequestFormLimits(MultipartBodyLengthLimit = 3147001541)]
See documentation and this question for details.
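One more note: if the app is served by Kestrel directly rather than behind IIS, the server-wide body size limit has to be raised as well. A minimal sketch, assuming the usual .NET 5 host builder in Program.cs:
// Program.cs, inside ConfigureWebHostDefaults(webBuilder => ...)
webBuilder.ConfigureKestrel(options =>
{
    // Kestrel's default is roughly 28.6 MB; setting null removes the limit entirely.
    options.Limits.MaxRequestBodySize = 3147001541;
});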
I'm trying to use Couchbase from .NET with the official SDK. I'm hosting the Couchbase cluster in Amazon EC2.
The machine I'm trying to connect from is hosted in Microsoft Azure.
For some reason, opening a bucket is extremely slow: this afternoon it took ~3 seconds, but now it's more than 10 seconds.
The memory and CPU utilization of the Couchbase servers are very low.
My configuration:
<couchbaseClients>
  <couchbase useSsl="false" operationLifespan="1000">
    <servers>
      <!-- Ip addresses obscured... -->
      <add uri="http://1.1.1.1:8091/pools"></add>
      <add uri="http://1.1.1.2:8091/pools"></add>
    </servers>
  </couchbase>
</couchbaseClients>
The code I'm trying:
var cluster = new Cluster("couchbaseClients/couchbase");
using (var bucket = cluster.OpenBucket("bucketname")) // This is taking 10-50 seconds.
{
    var obj = new TestClass { };
    // This is fast
    var result = bucket.Insert(new Document<TestClass> { Content = obj, Expiry = 300000, Id = Guid.NewGuid().ToString() });
}
I tried to optimize by using ClusterHelper.GetBucket() instead of cluster.OpenBucket(), but the first GetBucket() is still very slow (everything else is fast in that case).
I experimented with other calls on the bucket (Get, Contains, etc.), and everything is fast, only opening the bucket itself is slow.
How can I troubleshoot what the problem is?
UPDATE: I set up logging based on @jeffrymorris' suggestion, and I see the following error messages at the point where a lot of time is spent:
DEBUG Couchbase.Configuration.Server.Providers.ConfigProviderBase - Bootstrapping with 127.0.0.1:11210
INFO Couchbase.IO.ConnectionPool... Node 127.0.0.1:11210 failed to initialize, reason: System.Net.Sockets.SocketException (0x80004005): A connection attempt failed because the connected party did not properly respond after a period of time...
And after this there are three more exceptions about trying to connect to 127.0.0.1:11210, with several seconds passing between those messages, so this is what causes the slowness.
Why is the client trying to connect to localhost? I don't have that anywhere in my configuration.
I debugged the code of couchbase-net-client, and this is happening when I call OpenBucket() in CarrierPublicationProvider.GetConfig(). It gets the bucket configuration object, and for some reason it contains localhost:8091 instead of the servers I configured in web.config. I'm still trying to figure out why this is happening.
(This is probably not happening on my dev machine, because there I have a local Couchbase server installation. If I stop that, then it gets slow there as well, and I'm seeing the same Bootstrapping with 127.0.0.1:11210 error messages in the logs.)
Figuring out the problem took some debugging of the Couchbase client code and looking at the logs (thanks for the tip, @jeffrymorris!).
It turns out that if you don't have any preconfigured bucket, the client will initially try to connect to localhost:8091 (even if that's not in your servers section), which can take seconds until it realizes there is no server there. This doesn't happen if you have any bucket configured.
So I had to change my configuration from this:
<couchbaseClients>
  <couchbase useSsl="false" operationLifespan="1000">
    <servers>
      <!-- Ip addresses obscured... -->
      <add uri="http://1.1.1.1:8091/pools"></add>
      <add uri="http://1.1.1.2:8091/pools"></add>
    </servers>
  </couchbase>
</couchbaseClients>
To this:
<couchbaseClients>
  <couchbase useSsl="false" operationLifespan="1000">
    <servers>
      <!-- Ip addresses obscured... -->
      <add uri="http://1.1.1.1:8091/pools"></add>
      <add uri="http://1.1.1.2:8091/pools"></add>
    </servers>
    <buckets>
      <add name="default" useSsl="false" operationLifespan="1000">
      </add>
    </buckets>
  </couchbase>
</couchbaseClients>
Note that I'm not actually going to use a bucket called default, but having the configuration there still prevents the client from trying to connect to localhost. (This happens in the method ConfigProviderBase.GetOrCreateConfiguration.)
Update: The issue has been fixed here: https://github.com/couchbase/couchbase-net-client/commit/020093b422a78728dd49d75d9fe9f1e00d01a0f2
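As a side note, with the 2.x SDK the bootstrap cost can also be paid once at application start instead of on every request by using ClusterHelper. A minimal sketch, assuming the same couchbaseClients/couchbase config section (the exact Initialize overloads vary a bit between SDK versions, so verify against your version):
// Needs the Couchbase, Couchbase.Configuration.Client(.Providers) and System.Configuration namespaces.
// At application start (e.g. Global.asax Application_Start):
var section = (CouchbaseClientSection)ConfigurationManager.GetSection("couchbaseClients/couchbase");
ClusterHelper.Initialize(new ClientConfiguration(section));

// Per request: buckets are cached internally, so only the first call pays the bootstrap cost.
var bucket = ClusterHelper.GetBucket("bucketname");

// At application shutdown:
ClusterHelper.Close();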
I'm trying to use a Couchbase cluster hosted in Amazon EC2. The client I'm using it from is hosted in Microsoft Azure.
The performance is terrible: roughly 10% of the time, opening a bucket takes a very long time.
This is my configuration:
<couchbaseClients>
  <couchbase useSsl="false" operationLifespan="1000">
    <servers>
      <!-- Ip addresses obscured... -->
      <add uri="http://1.1.1.1:8091/pools"></add>
      <add uri="http://1.1.1.2:8091/pools"></add>
    </servers>
    <buckets>
      <add name="default" useSsl="false" operationLifespan="1000">
      </add>
    </buckets>
  </couchbase>
</couchbaseClients>
This is the code I'm testing with:
var cluster = new Cluster("couchbaseClients/couchbase");
using (var bucket = cluster.OpenBucket("bucketname")) // This sometimes takes 3-50 seconds.
{
    var obj = new TestClass { };
    // This is fast
    var result = bucket.Insert(new Document<TestClass> { Content = obj, Expiry = 300000, Id = Guid.NewGuid().ToString() });
}
Opening the Couchbase bucket sometimes (not always) takes a lot of time, anywhere between 3 and 50 seconds. It happens often enough to make this completely unusable.
When it happens, I can see the following error message in the Couchbase logs:
2015-12-10 14:18:57,644 [1] DEBUG Couchbase.Configuration.Server.Providers.ConfigProviderBase - Bootstrapping with 1.1.1.2:11210
2015-12-10 14:19:07,660 [1] INFO Couchbase.IO.ConnectionPool`1[[Couchbase.IO.Connection, Couchbase.NetClient, Version=2.2.2.0, Culture=neutral, PublicKeyToken=05e9c6b5a9ec94c2]] - Node 1.1.1.2:11210 failed to initialize, reason: System.Net.Sockets.SocketException (0x80004005): A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond
at Couchbase.IO.DefaultConnectionFactory.<GetGeneric>b__0[T](IConnectionPool`1 p, IByteConverter c, BufferAllocator b)
at Couchbase.IO.ConnectionPool`1.Initialize()
Note that 10 seconds pass between those two log lines. (I obscured the IP addresses.)
What can cause this problem, and how can I troubleshoot it?
This seems to be Azure-specific: I could not reproduce it on my local dev machine or on a machine hosted in Google Cloud. However, it happens consistently on two different Azure VMs.