.NET Core 2.0 MVC file upload progress - c#

So I've had a hell of a time trying to display a progress bar for my .NET Core MVC app, and the official documentation has not been very helpful.
Docs here: https://learn.microsoft.com/en-us/aspnet/core/mvc/models/file-uploads?view=aspnetcore-2.0#uploading-large-files-with-streaming
I also want to upload the file to Azure blob storage when it gets to my controllers.
The user can upload as many files as they like.
Here is my code for uploading:
for (int i = 0; i < videoFile.Count; i++)
{
    long totalBytes = videoFile[i].Length;
    byte[] buffer = new byte[16 * 1024];
    using (Stream input = videoFile[i].OpenReadStream())
    {
        long totalReadBytes = 0;
        int readBytes;
        while ((readBytes = input.Read(buffer, 0, buffer.Length)) > 0)
        {
            totalReadBytes += readBytes;
            var progress = (int)((float)totalReadBytes / (float)totalBytes * 100.0);
        }
    }
    string videoPath = videoFile[i].FileName;
    await sc.UploadBlobAsync(groupContainer, videoPath, videoFile[i]);
}
And here is my UploadBlobAsync method:
public async Task<bool> UploadBlobAsync(string blobContainer, string blobName, IFormFile file) {
    CloudBlobContainer container = await GetContainerAsync(blobContainer);
    CloudBlockBlob blob = container.GetBlockBlobReference(blobName);
    CancellationToken cancellationToken = new CancellationToken();
    IProgress<StorageProgress> progressHandler = new Progress<StorageProgress>(
        progress => Console.WriteLine("Progress: {0} bytes transferred", progress.BytesTransferred)
    );
    using (var filestream = file.OpenReadStream()) {
        await blob.UploadFromStreamAsync(filestream,
            default(AccessCondition),
            default(BlobRequestOptions),
            default(OperationContext),
            progressHandler,
            cancellationToken);
    }
    return true;
}
What I'd like to know is:
To my understanding, I would have to do two progress bars: one for the client machine to my server, then another from my server to Azure. Is this correct?
How do I display the progress for each of my files on the frontend? I'd imagine it would be an AJAX request to a list I set up in my controller?
Am I reading the bytes in the while loop when the file is already buffered? If I can access the file stream, isn't the file already buffered on the server?
How can I make use of Azure's IProgress implementation to return the result to me when it changes? Console.WriteLine does not seem to work.

From another angle, you could try the progress-checking mechanism of a blob upload; a sample is below:
var speedSummary = blobService.createBlockBlobFromStream('mycontainer', file.name, fileStream, file.size, function(error, result, response) {...});
setTimeout(function() {
    if (!finishedOrError) {
        var process = speedSummary.getCompletePercent();
        displayProcess(process);
        refreshProgress();
    }
}, 200);
You can find more details here: https://github.com/Azure/azure-storage-node/issues/310
1 - You can only use one progress bar.
2 - You could use the progress mechanism recently published on GitHub, based on a blob upload.

You are close; you need to provide a StorageProgress instance to your handler before calling UploadFromStreamAsync:
long byteLength = 1048576L; // 1 MB at a time
StorageProgress storageProgress = new StorageProgress(byteLength);
progressHandler.Report(storageProgress);
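To actually get those values somewhere the frontend can see them, one option is to have the Progress<StorageProgress> callback write into shared state that an AJAX-polled action reads, instead of writing to the console. A minimal sketch under that assumption - UploadProgressStore and the percentage calculation are hypothetical, not part of the storage SDK:
// Requires using System.Collections.Concurrent;
public static class UploadProgressStore
{
    private static readonly ConcurrentDictionary<string, int> _percentByBlob =
        new ConcurrentDictionary<string, int>();

    public static void Set(string blobName, int percent) => _percentByBlob[blobName] = percent;

    public static int Get(string blobName) =>
        _percentByBlob.TryGetValue(blobName, out var percent) ? percent : 0;
}

// Inside UploadBlobAsync, report into the store instead of the console:
long totalBytes = file.Length;
IProgress<StorageProgress> progressHandler = new Progress<StorageProgress>(p =>
    UploadProgressStore.Set(blobName, (int)(p.BytesTransferred * 100 / totalBytes)));

// A controller action the page can poll via AJAX, e.g.:
// [HttpGet] public IActionResult Progress(string blobName) => Json(UploadProgressStore.Get(blobName));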

Related

How to increase performance for converting docx to PDF in Microsoft Graph API

We are doing a docx-to-PDF conversion by first uploading a Word document (docx) using the large-file upload in the Microsoft Graph API (link) and then downloading it in PDF format (link). However, the whole process (upload + download) takes around 10-15 s to complete (file sizes around 5-15 MB), and we would like to know if there are any possibilities to improve the performance.
Code sample from Microsoft for the file upload:
using (var fileStream = System.IO.File.OpenRead(filePath))
{
// Use properties to specify the conflict behavior
// in this case, replace
var uploadProps = new DriveItemUploadableProperties
{
ODataType = null,
AdditionalData = new Dictionary<string, object>
{
{ "#microsoft.graph.conflictBehavior", "replace" }
}
};
// Create the upload session
// itemPath does not need to be a path to an existing item
var uploadSession = await graphClient.Me.Drive.Root
.ItemWithPath(itemPath)
.CreateUploadSession(uploadProps)
.Request()
.PostAsync();
// Max slice size must be a multiple of 320 KiB
int maxSliceSize = 320 * 1024;
var fileUploadTask =
new LargeFileUploadTask<DriveItem>(uploadSession, fileStream, maxSliceSize);
// Create a callback that is invoked after each slice is uploaded
IProgress<long> progress = new Progress<long>(prog => {
Console.WriteLine($"Uploaded {prog} bytes of {fileStream.Length} bytes");
});
try
{
// Upload the file
var uploadResult = await fileUploadTask.UploadAsync(progress);
if (uploadResult.UploadSucceeded)
{
// The ItemResponse object in the result represents the
// created item.
Console.WriteLine($"Upload complete, item ID: {uploadResult.ItemResponse.Id}");
}
else
{
Console.WriteLine("Upload failed");
}
}
catch (ServiceException ex)
{
Console.WriteLine($"Error uploading: {ex.ToString()}");
}
}
Here I am unsure what maxSliceSize should be; the docs merely state that it should be a "multiple of 320 KiB (320 * 1024)" (ref). Does this affect the performance in any way?
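In case it helps with experimenting: any multiple of 320 KiB is accepted, and a larger slice means fewer round trips per file, so it may affect upload time. A small sketch of picking a bigger slice (the ~10 MiB target is just an example for testing, not a documented recommendation):
// Slices must be a multiple of 320 KiB; round a target size down to the nearest multiple.
const int sliceUnit = 320 * 1024;        // 320 KiB
int targetSliceSize = 10 * 1024 * 1024;  // ~10 MiB, chosen arbitrarily
int maxSliceSize = (targetSliceSize / sliceUnit) * sliceUnit;

var fileUploadTask = new LargeFileUploadTask<DriveItem>(uploadSession, fileStream, maxSliceSize);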
Code sample for download:
private IDriveRequestBuilder GetDrive()
{
    var graphServiceClient = new GraphServiceClient(_authentication);
    return graphServiceClient.Sites[_oneDriveOptions.SiteId].Drive;
}
private async Task<Stream> GetPdf(string driveItemId)
{
    var pdfContentRequest = GetDrive().Items[driveItemId].Content.Request();
    pdfContentRequest.QueryOptions.Add(new QueryOption("format", "pdf"));
    var result = await pdfContentRequest.GetAsync();
    return result;
}
Is 10-15 s a reasonable amount of time to process both requests? I would guess this would be much faster if done in memory, but we haven't found a good enough conversion library that suits our needs, so we are currently stuck with the current approach. Maybe uploading to/downloading from OneDrive adds a lot of overhead?

How to cancel a BlockBlobClient upload mid-flow in C# in Blazor?

I am using Blazor WebAssembly to upload files to a user upload area in Azure blob storage.
The code works fine, but I want to be able to offer the user the option to cancel (e.g. in a large upload of potentially a few files). Here's the code:
public async Task<InfoBool> HandleFilesUpload(FileChangedEventArgs e, IProgress<Tuple<int, int, string>> progressHandler,
IProgress<Tuple<int,string>> fileCountHandler, ExternalFileDTO fileTemplate, InfoBool isCancelling)
{
int FileCount =0;
fileTemplate.CreatedBy = _userservice.CurrentUser;
Tuple<int, string> reportfile;
Tuple<int, int, string> CountProgressName;
foreach (var file in e.Files)
{
FileCount++;
reportfile = Tuple.Create(FileCount, file.Name);
fileCountHandler.Report(reportfile);
try
{
if (file == null)
{
return new InfoBool(false, "File is null");
}
long filesize = file.Size;
if (filesize > maxFileSize)
{
return new InfoBool(false, "File exceeds Max Size");
}
fileTemplate.OriginalFileName = file.Name;
var sendfile = await _azureservice.GetAzureUploadURLFile(fileTemplate);
if (!sendfile.Status.Success) // There was an error so return the details
{
return sendfile.Status;
}
CurrentFiles.Add(sendfile); // Add the returned sendfile object to the list
BlockBlobClient blockBlobclient = new BlockBlobClient(sendfile.CloudURI);
byte[] buffer = new byte[BufferSize];
using (var bufferedStream = new BufferedStream(file.OpenReadStream(maxFileSize), BufferSize))
{
int readCount = 0;
int bytesRead;
long TotalBytesSent = 0;
// track the current block number as the code iterates through the file
int blockNumber = 0;
// Create list to track blockIds, it will be needed after the loop
List<string> blockList = new List<string>();
while ((bytesRead = await bufferedStream.ReadAsync(buffer, 0, BufferSize)) > 0)
{
blockNumber++;
// set block ID as a string and convert it to Base64 which is the required format
string blockId = $"{blockNumber:0000000}";
string base64BlockId = Convert.ToBase64String(Encoding.UTF8.GetBytes(blockId));
Console.WriteLine($"Read:{readCount++} {bytesRead / (double)BufferSize} MB");
// Do work on the block of data
await blockBlobclient.StageBlockAsync(base64BlockId, new MemoryStream(buffer, 0, bytesRead));
// add the current blockId into our list
blockList.Add(base64BlockId);
TotalBytesSent += bytesRead;
int PercentageSent = (int)(TotalBytesSent * 100 / filesize);
CountProgressName = Tuple.Create(FileCount, PercentageSent, file.Name);
if (isCancelling.Success) //Used an InfoBool so its passed by reference
{
break; // The user has cancelled, so wind up.
}
progressHandler.Report(CountProgressName);
}
// add the blockList to Azure, which allows the service to stick the chunks together
if (isCancelling.Success)
{
await blockBlobclient.DeleteIfExistsAsync(); // Delete the blob created
}
else
{
await blockBlobclient.CommitBlockListAsync(blockList);
}
// make sure to dispose the stream once you are done
bufferedStream.Dispose(); // Belt and braces
}
//
// Now make a server API call to verify the file upload was successful
//
// Set the file status to the cancelling state so the server can delete the file.
sendfile.Status = isCancelling;
await _azureservice.VerifyFileAsync(sendfile);
if (isCancelling.Success)
{
break; // This breaks out of the foreach loop
}
}
catch (Exception exc)
{
Console.WriteLine(exc.Message);
}
finally
{
}
}
return new InfoBool(true, "All Ok");
}
The server creates a DB record for the upload and returns a SAS key in 'GetAzureUploadURLFile', which is then used to upload a block blob and commit it. Once done, the server is called again to tell it that the client has finished the upload. This allows the server to check the blob exists and verify the content.
I have managed to provide an 'isCancelling' link to the UI thread by way of a general-purpose class I use called InfoBool, which is passed by reference.
After each block the progress handlers are called and the InfoBool is polled. So far so good.
If isCancelling is set, the inner loop breaks and then I need to tidy up.
The code as it currently stands tries to just delete the blob via the blockBlobClient instead of committing it.
That fails with an exception. I thought maybe it was because the blob hadn't been committed, so I tried committing the block list before deleting it, but got the same result.
I have looked at the docs and they refer to abort mechanisms, but I can't see any examples of how to set them up and use them. Does anybody know how to abort the BlockBlobClient upload I have constructed?
Many thanks.
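In case it helps anyone later, here is a rough sketch (not a verified solution) of one way to wire cancellation through: the v12 BlockBlobClient methods accept a CancellationToken, so a CancellationTokenSource triggered by the UI's cancel button can abort the block currently in flight, with cleanup in a catch block. The cancelSource name is hypothetical:
var cancelSource = new CancellationTokenSource(); // the UI cancel button calls cancelSource.Cancel()
try
{
    while ((bytesRead = await bufferedStream.ReadAsync(buffer, 0, BufferSize, cancelSource.Token)) > 0)
    {
        blockNumber++;
        string base64BlockId = Convert.ToBase64String(Encoding.UTF8.GetBytes($"{blockNumber:0000000}"));

        // Cancellation aborts the block currently being staged.
        await blockBlobclient.StageBlockAsync(base64BlockId, new MemoryStream(buffer, 0, bytesRead),
            cancellationToken: cancelSource.Token);

        blockList.Add(base64BlockId);
        TotalBytesSent += bytesRead;
    }
    await blockBlobclient.CommitBlockListAsync(blockList, cancellationToken: cancelSource.Token);
}
catch (OperationCanceledException)
{
    // Nothing was committed on this attempt; uncommitted blocks are discarded by the service
    // after a while, so only delete the blob if an earlier attempt already committed it.
    await blockBlobclient.DeleteIfExistsAsync();
}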

Stream videos from Azure blob storage and ASP.NET Core 3

I'm using the latest and recommended Azure.Storage.Blobs package. I'm uploading the video file in chunks, which works fine. The problem is now returning the video to the web client, which is videojs. The player uses Range requests.
My endpoint:
[HttpGet]
[Route("video/{id}")]
[AllowAnonymous]
public async Task<IActionResult> GetVideoStreamAsync(string id)
{
var stream = await GetVideoFile(id);
return File(stream, "video/mp4", true); // true is for enableRangeProcessing
}
And my GetVideoFile method
var ms = new MemoryStream();
await blobClient.DownloadToAsync(ms, null, new StorageTransferOptions
{
InitialTransferLength = 1024 * 1024,
MaximumConcurrency = 20,
MaximumTransferLength = 4 * 1024 * 1024
});
ms.Position = 0;
return ms;
The video gets downloaded and streamed just fine, but it downloads the whole video and doesn't respect Range at all. I've also tried the overload that takes an HttpRange:
var ms = new MemoryStream();
// parse range header...
var range = new HttpRange(from, to);
BlobDownloadInfo info = await blobClient.DownloadAsync(range);
await info.Content.CopyToAsync(ms);
return ms;
But nothing gets displayed in the browser. What is the best way to achieve that?
Answering my own question in case someone comes across this.
CloudBlockBlob (the version I'm using is 11.2.2) has an OpenReadAsync() method which returns a stream. In my case I return this stream to videojs, which handles the Range header on its own.
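For reference, a minimal sketch of what that ends up looking like (assuming the v11 CloudBlockBlob mentioned above; the container/blob lookup is omitted):
var blob = container.GetBlockBlobReference(id);
var stream = await blob.OpenReadAsync();
// enableRangeProcessing lets ASP.NET Core answer the Range requests videojs sends.
return File(stream, "video/mp4", enableRangeProcessing: true);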
Please try resetting the memory stream's position to 0 before returning:
var ms = new MemoryStream();
// parse range header...
var range = new HttpRange(from, to);
BlobDownloadInfo info = await blobClient.DownloadAsync(range);
await info.Content.CopyToAsync(ms);
ms.Position = 0;//ms is positioned at the end of the stream so we need to reset that.
return ms;
I believe it's not possible to achieve this using only Azure Blob storage. More info here: https://stackoverflow.com/a/26053910/1384539
But in summary, you can use a CDN that offers seek start/end positions: https://docs.vdms.com/cdn/re3/Content/Streaming/HPD/Seeking_Within_a_Video.htm
Another possibility is to use Azure Media Services, which supports streaming. Your approach is actually a progressive download, which is not exactly the same idea, and you'd probably spend a lot on network egress (assuming the same file is accessed many times).

Video progressive download - can't seek in Chrome browser

I'm trying to play a video in the Chrome browser, with the source coming from a Web API:
<video id="TestVideo" class="dtm-video-element" controls="">
<source src="https://localhost:44305/Api/FilesController/Stream/Get" id="TestSource" type="video/mp4" />
</video>
In order to implement progressive downloading, I'm using PushStreamContent in the server response:
httpResponce.Content = new PushStreamContent((Action<Stream, HttpContent, TransportContext>)new StreamService(fileName,httpResponce).WriteContentToStream);
public async void WriteContentToStream(Stream outputStream, HttpContent content, TransportContext transportContext)
{
//here set the size of buffer
int bufferSize = 1024;
byte[] buffer = new byte[bufferSize];
//here we are using a stream to read the file from the db server
using (var fileStream = IOC.Container.Resolve<IMongoCommonService>().GridRecordFiles.GetFileAsStream(_fileName))
{
int totalSize = (int)fileStream.Length;
/*here we are saying: read bytes from the file as long as the total size of the file is greater than 0*/
_response.Content.Headers.Add("Content-Length", fileStream.Length.ToString());
// _response.Content.Headers.Add("Content-Range", "bytes 0-"+ totalSize.ToString()+"/"+ fileStream.Length);
while (totalSize > 0)
{
int count = totalSize > bufferSize ? bufferSize : totalSize;
//here we are reading the buffer from the original file
int sizeOfReadedBuffer = fileStream.Read(buffer, 0, count);
//here we are writing the read buffer to the output
await outputStream.WriteAsync(buffer, 0, sizeOfReadedBuffer);
//and finally, after writing to the output stream, decrement the remaining total size of the file.
totalSize -= sizeOfReadedBuffer;
}
}
}
After the page loads, the video starts to play immediately, but in the Google Chrome browser I cannot seek back to previous (already played) seconds of the video or rewind it. When I try to do this, the video goes back to the beginning.
In Firefox and Edge it works like it should; I can go back to the already played part. I don't know how to solve this issue in the Google Chrome browser.
You should use HTTP partial content. As described here:
As it turns out, looping (or any sort of seeking, for that matter) in video elements on Chrome only works if the video file was served up by a server that understands partial content requests.
So there are some articles that may help you to implement it. Try these links:
HTTP 206 Partial Content In ASP.NET Web API - Video File Streaming
How to work with HTTP Range Headers in WebAPI
Here is an implementation of responding to Range requests correctly - it reads a video from a file and returns it to the browser as a stream, so it doesn't eat up your server's RAM. You get the chance to decide what security you want to apply, etc., in code.
[HttpGet]
public HttpResponseMessage Video(string id)
{
bool rangeMode = false;
int startByte = 0;
if (Request.Headers.Range != null)
if (Request.Headers.Range.Ranges.Any())
{
rangeMode = true;
var range = Request.Headers.Range.Ranges.First();
startByte = Convert.ToInt32(range.From ?? 0);
}
var stream = new FileStream(/* FILE NAME - convert id to file somehow */, FileMode.Open, FileAccess.Read, FileShare.ReadWrite) {Position = startByte};
if (rangeMode)
{
HttpResponseMessage response = new HttpResponseMessage(HttpStatusCode.PartialContent)
{
Content = new ByteRangeStreamContent(stream, Request.Headers.Range, MediaTypeHeaderValue.Parse(fileDetails.MimeType))
};
response.Headers.AcceptRanges.Add("bytes");
return response;
}
else
{
HttpResponseMessage response = new HttpResponseMessage(HttpStatusCode.OK)
{
Content = new StreamContent(stream)
};
response.Content.Headers.ContentType = MediaTypeHeaderValue.Parse(fileDetails.MimeType);
return response;
}
}

How to stop file transfer if browser is closed/upload cancelled

I am uploading a file asynchronously with HTML5 in MVC3. If I have a large file, say 1GB in size, and after 50% upload completion I cancel the upload or close the browser, it still saves a 500MB file within the target folder.
How can I handle this problem within the controller and on the client side?
Here is my controller action:
[HttpPost]
public ActionResult Upload(object fileToUpload1)
{
var fileName = Request.Headers["X-File-Name"];
var fileSize = Request.Headers["X-File-Size"];
var fileType = Request.Headers["X-File-Type"];
Request.SaveAs("D:\\uploadimage\\" + fileName, false);
if (fileToUpload1 == null)
{
return Json(true, JsonRequestBehavior.AllowGet);
}
else { return Json(false, JsonRequestBehavior.AllowGet); }
// return Json(false, JsonRequestBehavior.AllowGet);
}
And here is the JavaScript:
// Uploading - for Firefox, Google Chrome and Safari
xhr = new XMLHttpRequest();
// Update progress bar
xhr.upload.addEventListener("progress", uploadProgress, false);
function uploadProgress(evt) {
if (evt.lengthComputable) {
var percentComplete = Math.round(evt.loaded * 100 / evt.total);
//assign value to progress bar div
var progressBar = document.getElementById("progressBar");
progressBar.max = evt.total;
progressBar.value = evt.loaded;
}
}
// File load event
xhr.upload.addEventListener("load", loadSuccess, false);
function loadSuccess(evt) {
$(fileParentDivobj).find(".ImgDiv").find("span").html("uploaded");
addfile(fileParentDivobj);
}
//handling error
xhr.addEventListener("error", uploadFailed, false);
xhr.addEventListener("abort", uploadCanceled, false);
function uploadFailed(evt) {
alert("There was an error attempting to upload the file.");
}
function uploadCanceled(evt) {
alert("The upload has been canceled by the user or the browser dropped the connection.");
}
xhr.open("POST", "#Url.Action("Upload","Home")", true);
// Set appropriate headers
xhr.setRequestHeader("Cache-Control", "no-cache");
xhr.setRequestHeader("X-Requested-With", "XMLHttpRequest");
xhr.setRequestHeader("Content-Type", "multipart/form-data");
xhr.setRequestHeader("X-File-Name", file.fileName);
xhr.setRequestHeader("X-File-Size", file.fileSize);
xhr.setRequestHeader("X-File-Type", file.type);
xhr.setRequestHeader("X-File", file);
// Send the file (doh)
xhr.send(file);
First, this is not something that should be resolved using client-side scripts, as I don't believe you will be able to make a new request when the browser is closing, and it certainly wouldn't work when the connection is interrupted because of network problems.
So I did some digging and I haven't found anything in ASP.NET that would tell me that the request connection was interrupted. However, we can check how much data we have received and how much data we should have received!
public ActionResult Upload()
{
// I like to keep all application data in App_Data, feel free to change this
var dir = Server.MapPath("~/App_Data");
if (!Directory.Exists(dir))
Directory.CreateDirectory(dir);
// extract file name from request and make sure it doesn't contain anything harmful
var name = Path.GetFileName(Request.Headers["X-File-Name"]);
foreach (var c in Path.GetInvalidFileNameChars())
name = name.Replace(c, '-');
// construct file path
var path = Path.Combine(dir, name);
// this variable will hold how much data we have received
var written = 0;
try
{
using (var output = new FileStream(path, FileMode.Create, FileAccess.Write, FileShare.None))
{
var buffer = new byte[0x1000];
var read = 0;
// while there is something to read, write it to output and increase counter
while ((read = Request.InputStream.Read(buffer, 0, buffer.Length)) > 0)
{
output.Write(buffer, 0, read);
output.Flush();
written += read;
}
}
}
finally
{
// once finished (or when exception was thrown) check whether we have all data from the request
// and if not - delete the file
if (Request.ContentLength != written)
System.IO.File.Delete(path);
}
return Json(new { success = true });
}
Tested with your client-side code using the ASP.NET dev server and Google Chrome.
Edit: just noticed Chuck Savage posted this principle in the comments earlier, so props to him :)
Focusing on the Request is the reason for forgetting the Response, which can tell you whether the client is still connected or not (Response.IsClientConnected).
By simply checking whether there is something to read, you ignore the case of a possibly long (how long?) network delay from the client side.
Use the Chuck and Lukas approach and incorporate the Response.IsClientConnected property and a Thread.Sleep of your choice in case there is nothing to read but the client is still connected. This way you will exit your read loop earlier if needed without generating a WTF from the client user.
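A rough sketch of folding that into the read loop from the answer above (targeting classic ASP.NET MVC, where Response.IsClientConnected is available; only the connectivity check is added here):
while ((read = Request.InputStream.Read(buffer, 0, buffer.Length)) > 0)
{
    // Bail out as soon as the browser disconnects instead of waiting for the read to fail.
    if (!Response.IsClientConnected)
        break;

    output.Write(buffer, 0, read);
    output.Flush();
    written += read;
}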
