How to stop file transfer if browser is closed/upload cancelled - C#

I am uploading a file asynchronously with HTML5 in MVC3. If I have a large file, say 1GB in size, and after 50% upload completion I cancel the upload or close the browser, it still saves a 500MB file within the target folder.
How can I handle this problem within the controller and on the client side?
Here is my controller action:
[HttpPost]
public ActionResult Upload(object fileToUpload1)
{
    var fileName = Request.Headers["X-File-Name"];
    var fileSize = Request.Headers["X-File-Size"];
    var fileType = Request.Headers["X-File-Type"];

    Request.SaveAs("D:\\uploadimage\\" + fileName, false);

    if (fileToUpload1 == null)
    {
        return Json(true, JsonRequestBehavior.AllowGet);
    }
    else
    {
        return Json(false, JsonRequestBehavior.AllowGet);
    }
    // return Json(false, JsonRequestBehavior.AllowGet);
}
And here is the JavaScript:
// Uploading - for Firefox, Google Chrome and Safari
xhr = new XMLHttpRequest();

// Update progress bar
xhr.upload.addEventListener("progress", uploadProgress, false);
function uploadProgress(evt) {
    if (evt.lengthComputable) {
        var percentComplete = Math.round(evt.loaded * 100 / evt.total);
        // assign value to progress bar div
        var progressBar = document.getElementById("progressBar");
        progressBar.max = evt.total;
        progressBar.value = evt.loaded;
    }
}

// File load event
xhr.upload.addEventListener("load", loadSuccess, false);
function loadSuccess(evt) {
    $(fileParentDivobj).find(".ImgDiv").find("span").html("uploaded");
    addfile(fileParentDivobj);
}

// Handling errors and cancellation
xhr.addEventListener("error", uploadFailed, false);
xhr.addEventListener("abort", uploadCanceled, false);
function uploadFailed(evt) {
    alert("There was an error attempting to upload the file.");
}
function uploadCanceled(evt) {
    alert("The upload has been canceled by the user or the browser dropped the connection.");
}

xhr.open("POST", '@Url.Action("Upload", "Home")', true);

// Set appropriate headers
xhr.setRequestHeader("Cache-Control", "no-cache");
xhr.setRequestHeader("X-Requested-With", "XMLHttpRequest");
xhr.setRequestHeader("Content-Type", "multipart/form-data");
xhr.setRequestHeader("X-File-Name", file.fileName);
xhr.setRequestHeader("X-File-Size", file.fileSize);
xhr.setRequestHeader("X-File-Type", file.type);
xhr.setRequestHeader("X-File", file);

// Send the file (doh)
xhr.send(file);

First, this is not something that should be resolved with client-side script: I don't believe you can reliably fire a new request while the browser is closing, and it certainly wouldn't work when the connection is interrupted by network problems.
So I did some digging, and I haven't found anything in ASP.NET that would tell me the request connection was interrupted. However, we can check how much data we have received against how much data we should have received!
public ActionResult Upload()
{
    // I like to keep all application data in App_Data, feel free to change this
    var dir = Server.MapPath("~/App_Data");
    if (!Directory.Exists(dir))
        Directory.CreateDirectory(dir);

    // extract the file name from the request and make sure it doesn't contain anything harmful
    var name = Path.GetFileName(Request.Headers["X-File-Name"]);
    foreach (var c in Path.GetInvalidFileNameChars())
        name = name.Replace(c, '-'); // strings are immutable, so reassign the result

    // construct the file path
    var path = Path.Combine(dir, name);

    // this variable will hold how much data we have received
    var written = 0;
    try
    {
        using (var output = new FileStream(path, FileMode.Create, FileAccess.Write, FileShare.None))
        {
            var buffer = new byte[0x1000];
            var read = 0;
            // while there is something to read, write it to the output and increase the counter
            while ((read = Request.InputStream.Read(buffer, 0, buffer.Length)) > 0)
            {
                output.Write(buffer, 0, read);
                output.Flush();
                written += read;
            }
        }
    }
    finally
    {
        // once finished (or when an exception was thrown), check whether we have
        // all the data from the request, and if not - delete the file
        if (Request.ContentLength != written)
            System.IO.File.Delete(path);
    }
    return Json(new { success = true });
}
Tested with your client-side code using the ASP.NET development server and Google Chrome.
Edit: I just noticed Chuck Savage posted this principle in the comments earlier, so props to him :)

Focusing on the Request makes it easy to forget the Response, which can tell you whether the client is still connected (Response.IsClientConnected).
By simply checking whether there is something to read, you ignore the case of a possibly long (how long?) network delay on the client side.
Use Chuck's and Lukas's approach, but incorporate the Response.IsClientConnected property and a thread sleep of your choice for the case where there is nothing to read but the client is still connected. This way you will exit your read loop earlier when needed, without generating a WTF from the client user.
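For illustration, here is a minimal sketch of the combined approach (hedged: it reuses the shape of the action above, the 100 ms sleep is an arbitrary choice, the file-name sanitization is omitted for brevity, and you would probably also want an overall timeout):

[HttpPost]
public ActionResult Upload()
{
    // sketch only: sanitize the name as in the answer above before using it
    var path = Path.Combine(Server.MapPath("~/App_Data"),
                            Path.GetFileName(Request.Headers["X-File-Name"]));
    var written = 0;
    try
    {
        using (var output = new FileStream(path, FileMode.Create, FileAccess.Write, FileShare.None))
        {
            var buffer = new byte[0x1000];
            while (written < Request.ContentLength)
            {
                // stop early if ASP.NET reports the client has gone away
                if (!Response.IsClientConnected)
                    break;

                var read = Request.InputStream.Read(buffer, 0, buffer.Length);
                if (read == 0)
                {
                    // nothing to read yet but the client is still connected:
                    // back off briefly instead of spinning (System.Threading)
                    Thread.Sleep(100);
                    continue;
                }
                output.Write(buffer, 0, read);
                written += read;
            }
        }
    }
    finally
    {
        // incomplete upload - remove the partial file
        if (written != Request.ContentLength)
            System.IO.File.Delete(path);
    }
    return Json(new { success = written == Request.ContentLength });
}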

Related

How to cancel a BlockBlobClient upload mid-flow in C# in Blazor?

I am using Blazor WebAssembly to upload files to a user upload area in Azure Blob Storage.
The code works fine, but I want to be able to offer the user the option to cancel (e.g. in a large upload of potentially a few files). Here's the code:
public async Task<InfoBool> HandleFilesUpload(FileChangedEventArgs e, IProgress<Tuple<int, int, string>> progressHandler,
    IProgress<Tuple<int, string>> fileCountHandler, ExternalFileDTO fileTemplate, InfoBool isCancelling)
{
    int FileCount = 0;
    fileTemplate.CreatedBy = _userservice.CurrentUser;
    Tuple<int, string> reportfile;
    Tuple<int, int, string> CountProgressName;
    foreach (var file in e.Files)
    {
        FileCount++;
        reportfile = Tuple.Create(FileCount, file.Name);
        fileCountHandler.Report(reportfile);
        try
        {
            if (file == null)
            {
                return new InfoBool(false, "File is null");
            }
            long filesize = file.Size;
            if (filesize > maxFileSize)
            {
                return new InfoBool(false, "File exceeds Max Size");
            }
            fileTemplate.OriginalFileName = file.Name;
            var sendfile = await _azureservice.GetAzureUploadURLFile(fileTemplate);
            if (!sendfile.Status.Success) // There was an error, so return the details
            {
                return sendfile.Status;
            }
            CurrentFiles.Add(sendfile); // Add the returned sendfile object to the list
            BlockBlobClient blockBlobclient = new BlockBlobClient(sendfile.CloudURI);
            byte[] buffer = new byte[BufferSize];
            using (var bufferedStream = new BufferedStream(file.OpenReadStream(maxFileSize), BufferSize))
            {
                int readCount = 0;
                int bytesRead;
                long TotalBytesSent = 0;
                // Track the current block number as the code iterates through the file
                int blockNumber = 0;
                // Create a list to track blockIds; it will be needed after the loop
                List<string> blockList = new List<string>();
                while ((bytesRead = await bufferedStream.ReadAsync(buffer, 0, BufferSize)) > 0)
                {
                    blockNumber++;
                    // Set the block ID as a string and convert it to Base64, which is the required format
                    string blockId = $"{blockNumber:0000000}";
                    string base64BlockId = Convert.ToBase64String(Encoding.UTF8.GetBytes(blockId));
                    Console.WriteLine($"Read:{readCount++} {bytesRead / (double)BufferSize} MB");
                    // Do work on the block of data
                    await blockBlobclient.StageBlockAsync(base64BlockId, new MemoryStream(buffer, 0, bytesRead));
                    // Add the current blockId to our list
                    blockList.Add(base64BlockId);
                    TotalBytesSent += bytesRead;
                    int PercentageSent = (int)(TotalBytesSent * 100 / filesize);
                    CountProgressName = Tuple.Create(FileCount, PercentageSent, file.Name);
                    if (isCancelling.Success) // Used an InfoBool so it's passed by reference
                    {
                        break; // The user has cancelled, so wind up.
                    }
                    progressHandler.Report(CountProgressName);
                }
                // Commit the blockList to Azure, which allows the service to stick the chunks together
                if (isCancelling.Success)
                {
                    await blockBlobclient.DeleteIfExistsAsync(); // Delete the blob created
                }
                else
                {
                    await blockBlobclient.CommitBlockListAsync(blockList);
                }
                // Make sure to dispose the stream once you are done
                bufferedStream.Dispose(); // Belt and braces
            }
            //
            // Now make a server API call to verify the file upload was successful
            //
            // Set the file status to the cancelling state, so the server can delete the file.
            sendfile.Status = isCancelling;
            await _azureservice.VerifyFileAsync(sendfile);
            if (isCancelling.Success)
            {
                break; // This breaks out of the foreach loop
            }
        }
        catch (Exception exc)
        {
            Console.WriteLine(exc.Message);
        }
        finally
        {
        }
    }
    return new InfoBool(true, "All Ok");
}
The server creates a DB record for the upload and returns a SAS key from 'GetAzureUploadURLFile', which is then used to upload a block blob and commit it. Once done, the server is called again to tell it that the client has finished the upload. This allows the server to check the blob exists and verify the content.
I have managed to provide an 'isCancelling' link to the UI thread by way of a general-purpose class I use called InfoBool, which is passed by reference.
After each block the progress handlers are called and the 'InfoBool' is polled. So far so good.
If isCancelling is set, the inner loop breaks and then I need to tidy up.
The code as it currently stands tries to just delete the blockBlobClient instead of committing it.
That fails with an exception. I thought maybe it's because it hadn't been committed, so I tried committing the blockBlobClient before deleting it, but got the same result.
I have looked at the docs and they refer to abort mechanisms, but I can't see any examples of how to set them up and use them. Does anybody know how to abort the blockBlobClient upload I have constructed?
Many thanks.
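Not a definitive answer, but for what it's worth: the "abort mechanism" in the Azure SDK docs generally means cooperative cancellation, i.e. every BlockBlobClient method accepts an optional CancellationToken as its last parameter. A minimal sketch of how the inner loop might use one instead of polling the InfoBool (the name cts is illustrative, and the sketch assumes the same buffer, bufferedStream, blockList and blockBlobclient variables as the code above):

// Sketch: cooperative cancellation of a staged block upload (Azure.Storage.Blobs).
// The token source would be cancelled from the UI instead of setting an InfoBool flag.
var cts = new CancellationTokenSource();

try
{
    int bytesRead;
    int blockNumber = 0;
    while ((bytesRead = await bufferedStream.ReadAsync(buffer, 0, BufferSize, cts.Token)) > 0)
    {
        blockNumber++;
        string base64BlockId = Convert.ToBase64String(
            Encoding.UTF8.GetBytes($"{blockNumber:0000000}"));
        // Passing the token lets an in-flight StageBlockAsync call stop promptly
        await blockBlobclient.StageBlockAsync(base64BlockId,
            new MemoryStream(buffer, 0, bytesRead), cancellationToken: cts.Token);
        blockList.Add(base64BlockId);
    }
    await blockBlobclient.CommitBlockListAsync(blockList, cancellationToken: cts.Token);
}
catch (OperationCanceledException)
{
    // Nothing was committed, so there is no blob to delete yet - which would also
    // explain the exception from DeleteIfExistsAsync; uncommitted blocks are
    // garbage-collected by the service after a while.
}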

C# Server downloads the file and transfers it to the user at the same time. Download fails [Google Drive]

I'm writing a program with ASP.NET Core.
The program downloads a file from Google Drive without storing it in memory or on disk and transfers it to the user.
The download function in the official Google Drive libraries does not return until the download has finished. For this reason, I send a plain GET request to the API, read the file as a stream, and return it to the user.
But past a certain size, downloading in programs like IDM or in the browser ends with an error.
In short, I want to make a program that uses the server as a bridge, and the transfer should not be interrupted.
[HttpGet]
[DisableRequestSizeLimit]
public async Task<FileStreamResult> Download([FromQuery(Name = "file")] string fileid)
{
    if (!string.IsNullOrEmpty(fileid))
    {
        var decoded = Base64.Base64Decode(fileid);
        var file = DriveAPI.service.Files.Get(decoded);
        file.SupportsAllDrives = true;
        file.SupportsTeamDrives = true;
        var fileinf = file.Execute();
        var filesize = fileinf.FileSize;
        var cli = new HttpClient(DriveAPI.service.HttpClient.MessageHandler);
        //var req = await cli.SendAsync(file.CreateRequest());
        var req = await cli.GetAsync($"https://www.googleapis.com/drive/v2/files/{decoded}?alt=media", HttpCompletionOption.ResponseHeadersRead);
        //var req = await DriveAPI.service.HttpClient.GetAsync($"https://www.googleapis.com/drive/v2/files/{decoded}?alt=media", HttpCompletionOption.ResponseHeadersRead);
        var contenttype = req.Content.Headers.ContentType.MediaType;
        if (contenttype == "application/json")
        {
            var message = JObject.Parse(req.Content.ReadAsStringAsync().Result).SelectToken("error.message");
            if (message.ToString() == "The download quota for this file has been exceeded")
            {
                // message translated from Turkish
                throw new Exception("The Google Drive daily download quota has been exceeded. Please try again in 24-48 hours.");
            }
            else
            {
                throw new Exception(message.ToString());
            }
        }
        else
        {
            return File(req.Content.ReadAsStream(), contenttype, fileinf.OriginalFilename, false);
        }
    }
    else
    {
        return null;
    }
}
Some errors are written to the log file when downloading:
Received an unexpected EOF or 0 bytes from the transport stream.
Unable to read data from the transport connection.
etc.
If the user is using IDM, the error is:
Server sent wrong answer on restart command
If the user is downloading from the browser, the error is:
Network Error
I started a 1.5 GB file download on an 8 Mbps connection; when approximately 900 MB had been downloaded, IDM stopped the download with the said error.
I have no idea what to do other than returning a FileStreamResult in ASP.NET Core - how can I serve concurrent downloads like this?
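Not a tested answer, but both symptoms (IDM's failed "restart command" and the browser's network error on big files) are consistent with the bridge not supporting resume: the proxied response exposes no Content-Length and no Range handling, so a dropped or segmented download cannot be restarted. A hedged sketch of forwarding the client's Range header to the Drive URL and mirroring the reply back, writing straight to Response.Body instead of returning a FileStreamResult (same cli and decoded variables assumed as in the question; not verified against Drive):

// Sketch: pass the download manager's Range request through to Google Drive
// and mirror the status and headers, so resumed/segmented downloads line up.
var upstream = new HttpRequestMessage(HttpMethod.Get,
    $"https://www.googleapis.com/drive/v2/files/{decoded}?alt=media");
if (Request.Headers.TryGetValue("Range", out var range))
    upstream.Headers.TryAddWithoutValidation("Range", range.ToString());

var resp = await cli.SendAsync(upstream, HttpCompletionOption.ResponseHeadersRead);

Response.StatusCode = (int)resp.StatusCode;               // 200, or 206 for a range
Response.Headers["Accept-Ranges"] = "bytes";
if (resp.Content.Headers.ContentLength is long len)
    Response.ContentLength = len;                         // lets clients resume/verify
if (resp.Content.Headers.ContentRange != null)
    Response.Headers["Content-Range"] = resp.Content.Headers.ContentRange.ToString();

await resp.Content.CopyToAsync(Response.Body);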

How to check file content is missing in File Upload Web API

I have a Web API to upload a single file, which is sent in the request body. I have the code below to read the file binaries from the stream:
Task<HttpResponseMessage> task = Request.Content.ReadAsStreamAsync().ContinueWith<HttpResponseMessage>(t =>
{
    if (t.IsFaulted || t.IsCanceled)
        throw new HttpResponseException(HttpStatusCode.InternalServerError);
    try
    {
        using (Stream stream = t.Result)
        {
            using (MemoryStream ms = new MemoryStream())
            {
                stream.Seek(0, SeekOrigin.Begin);
                stream.CopyTo(ms);
                byte[] fileBinaries = ms.ToArray();
                // logic to process the file
            }
        }
    }
    catch (Exception e)
    {
        // exception handling logic
    }
    return Request.CreateResponse(HttpStatusCode.Created);
});
return task;
The API works fine when called with a file attached and returns HTTP status code 201. But if I don't attach a file to the API call, it still returns the same code, as there is no check on the binary data received. I want to add that check so I can return an appropriate error message to the user.
I tried to perform this check by evaluating the length of the fileBinaries byte array read from Request.Content. But the array contains a few bytes representing the text [object FileList] (I don't know how these bytes end up in the array, as I haven't attached any file to the API call), so this won't work for me.
I also tried HttpContext.Current.Request.Files.Count, but it always returns 0 (probably because the file binaries are sent in the request body), so it's not suitable for my check.
I can't rely on any headers like the file name, as those are not sent in the request.
Any help on how to perform this?
Try using MultipartMemoryStreamProvider, which is ideal for file uploads with Web API:
public async Task<IHttpActionResult> UploadFile()
{
    var filesReadToProvider = await Request.Content.ReadAsMultipartAsync();
    foreach (var stream in filesReadToProvider.Contents)
    {
        var fileBytes = await stream.ReadAsByteArrayAsync();
        // logic to process the file
    }
    return Ok();
}
Here fileBytes may not have those stray 17 bytes (the text [object FileList] from your original attempt).
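To address the actual check the question asks for, here is a hedged variation of the same idea (the BadRequest messages and the exact checks chosen are illustrative, not a definitive implementation):

public async Task<IHttpActionResult> UploadFile()
{
    // reject anything that isn't multipart/form-data up front
    if (!Request.Content.IsMimeMultipartContent())
        return BadRequest("Expected multipart/form-data content.");

    var provider = await Request.Content.ReadAsMultipartAsync();

    // no parts at all means no file was attached to the call
    if (provider.Contents.Count == 0)
        return BadRequest("No file was attached to the request.");

    foreach (var content in provider.Contents)
    {
        var fileBytes = await content.ReadAsByteArrayAsync();
        // an attached-but-empty file is also worth rejecting
        if (fileBytes.Length == 0)
            return BadRequest("The attached file is empty.");
        // logic to process the file
    }
    return Ok();
}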

Video Progressive Download - cannot seek in Chrome browser

I'm trying to play, in the Chrome browser, a video whose source comes from a Web API:
<video id="TestVideo" class="dtm-video-element" controls="">
<source src="https://localhost:44305/Api/FilesController/Stream/Get" id="TestSource" type="video/mp4" />
</video>
In order to implement progressive downloading, I'm using PushStreamContent in the server response:
httpResponce.Content = new PushStreamContent((Action<Stream, HttpContent, TransportContext>)new StreamService(fileName, httpResponce).WriteContentToStream);

public async void WriteContentToStream(Stream outputStream, HttpContent content, TransportContext transportContext)
{
    // here set the size of the buffer
    int bufferSize = 1024;
    byte[] buffer = new byte[bufferSize];
    // here we are using a stream to read the file from the db server
    using (var fileStream = IOC.Container.Resolve<IMongoCommonService>().GridRecordFiles.GetFileAsStream(_fileName))
    {
        int totalSize = (int)fileStream.Length;
        /* here we are saying: read bytes from the file as long as the remaining
           size of the file is greater than 0 */
        _response.Content.Headers.Add("Content-Length", fileStream.Length.ToString());
        // _response.Content.Headers.Add("Content-Range", "bytes 0-" + totalSize.ToString() + "/" + fileStream.Length);
        while (totalSize > 0)
        {
            int count = totalSize > bufferSize ? bufferSize : totalSize;
            // here we are reading a buffer's worth of the original file
            int sizeOfReadedBuffer = fileStream.Read(buffer, 0, count);
            // here we are writing the buffer we just read to the output
            await outputStream.WriteAsync(buffer, 0, sizeOfReadedBuffer);
            // and finally, after writing to the output stream, subtracting it from the total size of the file
            totalSize -= sizeOfReadedBuffer;
        }
    }
}
After the page loads, the video starts to play immediately, but I cannot seek back to previous (already played) seconds of the video, or rewind it, in the Google Chrome browser. When I try to do this, the video goes back to the beginning.
But in Firefox and Edge it works like it should: I can go back to an already played part. I don't know how to solve this issue in the Google Chrome browser.
You should use HTTP partial content. As described here:
As it turns out, looping (or any sort of seeking, for that matter) in video elements on Chrome only works if the video file was served up by a server that understands partial content requests.
So there are some articles that may help you implement it. Try these links:
HTTP 206 Partial Content In ASP.NET Web API - Video File Streaming
How to work with HTTP Range Headers in WebAPI
Here is an implementation of responding to Range requests correctly - it reads a video from a file and returns it to the browser as a stream, so it doesn't eat up your server's RAM. You get the chance to decide the security you want to apply, etc., in code.
[HttpGet]
public HttpResponseMessage Video(string id)
{
    bool rangeMode = false;
    int startByte = 0;
    if (Request.Headers.Range != null)
        if (Request.Headers.Range.Ranges.Any())
        {
            rangeMode = true;
            var range = Request.Headers.Range.Ranges.First();
            startByte = Convert.ToInt32(range.From ?? 0);
        }
    var stream = new FileStream(/* FILE NAME - convert id to file somehow */, FileMode.Open, FileAccess.Read, FileShare.ReadWrite) { Position = startByte };
    if (rangeMode)
    {
        HttpResponseMessage response = new HttpResponseMessage(HttpStatusCode.PartialContent)
        {
            Content = new ByteRangeStreamContent(stream, Request.Headers.Range, MediaTypeHeaderValue.Parse(fileDetails.MimeType))
        };
        response.Headers.AcceptRanges.Add("bytes");
        return response;
    }
    else
    {
        HttpResponseMessage response = new HttpResponseMessage(HttpStatusCode.OK)
        {
            Content = new StreamContent(stream)
        };
        response.Content.Headers.ContentType = MediaTypeHeaderValue.Parse(fileDetails.MimeType);
        return response;
    }
}

Accessing files on MSSQL filestore through UNC path is causing delay - C#

I am experiencing some strange behaviour from the code I am using to stream files to my clients.
I have an MSSQL server which acts as a filestore, with files that are accessed via a UNC path.
On my web server I have some .NET code running that handles streaming the files (in this case pictures and thumbnails) to my clients.
My code works, but I am experiencing a constant delay of ~12 sec on the initial file request. Once I have made the initial request, it is as if the server wakes up and suddenly becomes responsive, only to fall back to the same behaviour some time later.
At first I thought it was my code, but from what I can see in the server activity log there is no resource-intensive code running. My theory is that on each call to the server the path must first be mounted, and that is what causes the delay. It then unmounts some time later and has to be remounted.
For reference I am posting my code (maybe I just cannot see the problem):
public async static Task StreamFileAsync(HttpContext context, FileInfo fileInfo)
{
    // This controls how many bytes to read at a time and send to the client
    int bytesToRead = 512 * 1024; // 512KB

    // Buffer to read bytes in the chunk size specified above
    byte[] buffer = new Byte[bytesToRead];

    // Clear the current response content/headers
    context.Response.Clear();
    context.Response.ClearHeaders();

    // Indicate the type of data being sent
    context.Response.ContentType = FileTools.GetMimeType(fileInfo.Extension);

    // Name the file
    context.Response.AddHeader("Content-Disposition", "filename=\"" + fileInfo.Name + "\"");
    context.Response.AddHeader("Content-Length", fileInfo.Length.ToString());

    // Open the file
    using (var stream = fileInfo.OpenRead())
    {
        // The number of bytes read
        int length;
        do
        {
            // Verify that the client is connected
            if (context.Response.IsClientConnected)
            {
                // Read data into the buffer
                length = await stream.ReadAsync(buffer, 0, bytesToRead);
                // and write it out to the response's output stream
                await context.Response.OutputStream.WriteAsync(buffer, 0, length);
                try
                {
                    // Flush the data
                    context.Response.Flush();
                }
                catch (HttpException)
                {
                    // Cancel the download if an HttpException happens
                    // (i.e. the client has disconnected but we tried to send some data)
                    length = -1;
                }
                // Clear the buffer
                buffer = new Byte[bytesToRead];
            }
            else
            {
                // Cancel the download if the client has disconnected
                length = -1;
            }
        } while (length > 0); // Repeat until no data is read
    }
    // Tell the response not to send any more content to the client
    context.Response.SuppressContent = true;
    // Tell the application to skip to the EndRequest event in the HTTP pipeline
    context.ApplicationInstance.CompleteRequest();
}
If anyone could shed some light on this problem I would be very grateful!
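If the theory about the SMB session being torn down is right, one cheap way to test it is a background keep-alive that touches the share at a shorter interval than the observed idle window. A minimal sketch (the path and the 60-second interval are placeholders, not taken from the question):

using System;
using System.IO;
using System.Threading;

// Hypothetical keep-alive probe: touching the UNC share periodically keeps
// the SMB session warm, so the first real request doesn't pay the reconnect
// cost. If the ~12 s delay disappears while this runs, the theory holds.
public static class UncKeepAlive
{
    private static Timer _timer;

    public static void Start(string uncRoot, TimeSpan interval)
    {
        _timer = new Timer(_ =>
        {
            // a cheap metadata call is enough to keep the session alive;
            // Directory.Exists swallows errors, so an unreachable share
            // simply returns false here rather than throwing
            Directory.Exists(uncRoot);
        }, null, TimeSpan.Zero, interval);
    }

    public static void Stop() => _timer?.Dispose();
}

// e.g. in Application_Start:
// UncKeepAlive.Start(@"\\sqlserver\filestore", TimeSpan.FromSeconds(60));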
