File uploading and processing in ASP.NET MVC - C#

I need to implement the following page:
The user clicks an "Upload file" button, selects a file, and it is uploaded to the server. After the upload the file is processed by a converter, and the converter can report the percentage of the conversion completed. How do I implement a single continuous progress bar on the page (progress = upload progress + conversion progress)?
I'm using Plupload - this tool can report the percentage of the file uploaded to the server, but I can't override the reported percentage.
This is my upload action:
public ActionResult ConferenceAttachment(int? chunk, string name, Identity cid)
{
    var fileUpload = Request.Files[0];

    // Create a temporary folder for the uploaded chunks.
    var tempfolder = Path.GetTempFileName().Replace('.', '-');
    Directory.CreateDirectory(tempfolder);
    var fn = Path.Combine(tempfolder, name);

    chunk = chunk ?? 0;
    // The first chunk creates the file; later chunks are appended to it.
    using (var fs = new FileStream(fn, chunk == 0 ? FileMode.Create : FileMode.Append))
    {
        fileUpload.InputStream.CopyTo(fs);
    }
    // CONVERTING ....
    return Content("OK", "text/plain");
}
Which architecture would solve my problem? Or which JS upload library should I use?

You need some kind of real-time connection or streaming to do this.
1) Using SignalR you can create a Hub to notify the client about the progress; it's easy to implement and powerful (see the sketch below).
Learn About ASP.NET SignalR
2) You can adapt this example of PushStreamContent to send the progress while you are converting:
Asynchronously streaming video with ASP.NET Web API
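A minimal sketch of option 1), assuming classic ASP.NET SignalR 2.x (Microsoft.AspNet.SignalR); the hub name, the client method name, and keying by connection id are assumptions, not something from the question:

using Microsoft.AspNet.SignalR;

public class ProgressHub : Hub
{
    // Call this from the server-side conversion loop to push the conversion
    // percentage to the client that uploaded the file.
    public static void ReportConversionProgress(string connectionId, int percent)
    {
        var context = GlobalHost.ConnectionManager.GetHubContext<ProgressHub>();
        // The page listens for "conversionProgress" and combines it with the
        // upload percentage Plupload already reports, e.g.
        // total = 0.5 * uploadPercent + 0.5 * convertPercent.
        context.Clients.Client(connectionId).conversionProgress(percent);
    }
}

The upload action would receive the caller's SignalR connection id along with the file (for example as an extra form field) and call ReportConversionProgress from the conversion step.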

You can set the buffer size to whatever you want, but I would suggest uploading about 8 KB of the file at a time and meanwhile starting the conversion process asynchronously. For a smooth progress bar, though, it also depends on what kind of file you want to upload.

Related

Stream videos from Azure blob storage and ASP.NET Core 3

I'm using the latest and recommended Azure.Storage.Blobs package. I'm uploading the video file in chunks, which works fine. The problem now is returning the video to the web client, which is video.js. The player uses Range requests.
My endpoint:
[HttpGet]
[Route("video/{id}")]
[AllowAnonymous]
public async Task<IActionResult> GetVideoStreamAsync(string id)
{
    var stream = await GetVideoFile(id);
    return File(stream, "video/mp4", true); // true is for enableRangeProcessing
}
And my GetVideoFile method
var ms = new MemoryStream();
await blobClient.DownloadToAsync(ms, null, new StorageTransferOptions
{
    InitialTransferLength = 1024 * 1024,
    MaximumConcurrency = 20,
    MaximumTransferLength = 4 * 1024 * 1024
});
ms.Position = 0;
return ms;
The video gets downloaded and streamed just fine, but it downloads the whole video and doesn't respect the Range header at all. I've also tried DownloadAsync with an HttpRange:
var ms = new MemoryStream();
// parse range header...
var range = new HttpRange(from, to);
BlobDownloadInfo info = await blobClient.DownloadAsync(range);
await info.Content.CopyToAsync(ms);
return ms;
But nothing gets displayed in the browser. What is the best way to achieve that?
Answering my own question in case someone comes across this.
CloudBlockBlob (the version I'm using: 11.2.2) now has an OpenReadAsync() method which returns a stream. In my case I return this stream to video.js, which handles the Range header on its own.
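A minimal sketch of what that can look like, assuming Microsoft.Azure.Storage.Blob 11.x (CloudBlockBlob) behind the same ASP.NET Core endpoint; the container name and the _blobClient field are placeholders:

[HttpGet]
[Route("video/{id}")]
[AllowAnonymous]
public async Task<IActionResult> GetVideoStreamAsync(string id)
{
    // _blobClient is an injected CloudBlobClient; "videos" is a placeholder container name.
    CloudBlobContainer container = _blobClient.GetContainerReference("videos");
    CloudBlockBlob blob = container.GetBlockBlobReference(id);

    // OpenReadAsync returns a seekable stream over the blob, so FileStreamResult with
    // enableRangeProcessing can serve only the bytes the Range header asks for.
    Stream stream = await blob.OpenReadAsync();
    return File(stream, "video/mp4", enableRangeProcessing: true);
}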
Please try by resetting the memory stream's position to 0 before returning:
var ms = new MemoryStream();
// parse range header...
var range = new HttpRange(from, to);
BlobDownloadInfo info = await blobClient.DownloadAsync(range);
await info.Content.CopyToAsync(ms);
ms.Position = 0; // ms is positioned at the end of the stream, so we need to reset it
return ms;
I believe it's not possible to achieve this using Azure Blob storage alone. More info here: https://stackoverflow.com/a/26053910/1384539
But in summary, you can use a CDN that offers seek start/end positions: https://docs.vdms.com/cdn/re3/Content/Streaming/HPD/Seeking_Within_a_Video.htm
Another possibility is to use Azure Media Services, which supports streaming. Your approach is actually a progressive download, which is not exactly the same idea, and you'd probably spend a lot on network egress (assuming the same file is accessed many times).

Video Progressive Download - can't seek in Chrome browser

I'm trying to play a video in the Chrome browser with its source coming from a Web API:
<video id="TestVideo" class="dtm-video-element" controls="">
<source src="https://localhost:44305/Api/FilesController/Stream/Get" id="TestSource" type="video/mp4" />
</video>
In order to implement progressive downloading, I'm using PushStreamContent in the server response:
httpResponce.Content = new PushStreamContent((Action<Stream, HttpContent, TransportContext>)new StreamService(fileName,httpResponce).WriteContentToStream);
public async void WriteContentToStream(Stream outputStream, HttpContent content, TransportContext transportContext)
{
    // here we set the size of the buffer
    int bufferSize = 1024;
    byte[] buffer = new byte[bufferSize];
    // here we use a stream to read the file from the db server
    using (var fileStream = IOC.Container.Resolve<IMongoCommonService>().GridRecordFiles.GetFileAsStream(_fileName))
    {
        int totalSize = (int)fileStream.Length;
        /* here we are saying: keep reading bytes from the file as long as the remaining size
           is greater than 0 */
        _response.Content.Headers.Add("Content-Length", fileStream.Length.ToString());
        // _response.Content.Headers.Add("Content-Range", "bytes 0-" + totalSize.ToString() + "/" + fileStream.Length);
        while (totalSize > 0)
        {
            int count = totalSize > bufferSize ? bufferSize : totalSize;
            // here we read a buffer from the original file
            int sizeOfReadedBuffer = fileStream.Read(buffer, 0, count);
            // here we write the buffer we just read to the output
            await outputStream.WriteAsync(buffer, 0, sizeOfReadedBuffer);
            // and finally, after writing to the output stream, we subtract it from the remaining size
            totalSize -= sizeOfReadedBuffer;
        }
    }
}
After the page loads, the video starts to play immediately, but I cannot seek back to previous (already played) seconds of the video or rewind it in Google Chrome. When I try to do this, the video goes back to the beginning.
In Firefox and Edge it works as it should: I can go back to an already played part. I don't know how to solve this issue in the Google Chrome browser.
You should use HTTP partial content. As described here:
As it turns out, looping (or any sort of seeking, for that matter) in <video> elements on Chrome only works if the video file was served up by a server that understands partial content requests.
So there are some articles that may help you to implement it. Try these links:
HTTP 206 Partial Content In ASP.NET Web API - Video File Streaming
How to work with HTTP Range Headers in WebAPI
Here is an implementation of responding to Range requests correctly - it reads a video from a file and returns it to the browser as a stream, so it doesn't eat up your server's RAM. You get the chance to decide the security you want to apply, etc., in code.
[HttpGet]
public HttpResponseMessage Video(string id)
{
    bool rangeMode = false;
    int startByte = 0;
    if (Request.Headers.Range != null)
        if (Request.Headers.Range.Ranges.Any())
        {
            rangeMode = true;
            var range = Request.Headers.Range.Ranges.First();
            startByte = Convert.ToInt32(range.From ?? 0);
        }

    var stream = new FileStream(/* FILE NAME - convert id to file somehow */, FileMode.Open, FileAccess.Read, FileShare.ReadWrite) { Position = startByte };

    if (rangeMode)
    {
        HttpResponseMessage response = new HttpResponseMessage(HttpStatusCode.PartialContent)
        {
            Content = new ByteRangeStreamContent(stream, Request.Headers.Range, MediaTypeHeaderValue.Parse(fileDetails.MimeType))
        };
        response.Headers.AcceptRanges.Add("bytes");
        return response;
    }
    else
    {
        HttpResponseMessage response = new HttpResponseMessage(HttpStatusCode.OK)
        {
            Content = new StreamContent(stream)
        };
        response.Content.Headers.ContentType = MediaTypeHeaderValue.Parse(fileDetails.MimeType);
        return response;
    }
}

How to cancel upload in Google Drive API?

I am developing an application for uploading files to Google Drive using the Drive API. I am able to upload an image successfully, but my problem is that when I cancel the upload, it is not cancelled. I am using the following code:
Google.Apis.Drive.v2.Data.File image = new Google.Apis.Drive.v2.Data.File();
// set the title of the file
image.Title = fileName;
// set the MIME type of the file
image.MimeType = "image/jpeg";
// reset the stream position to 0 because the stream has already been read
strm.Position = 0;
// create a request to insert the file
System.Threading.CancellationTokenSource ctsUpload = new CancellationTokenSource();
FilesResource.InsertMediaUpload request = App.GoogleDriveClient.Files.Insert(image, strm, "image/jpeg");
request.ChunkSize = 512 * 1024;
request.ProgressChanged += request_ProgressChanged;
// show the upload progress
// upload the file
request.UploadAsync(ctsUpload.Token);
The following code is run in the cancel button's click event:
ctsUpload.Cancel();
Based on the following answer (Google drive SDK: Cancel upload), it seems Google still has this issue unresolved.
In case of need axe found a solution.
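One practical detail visible in the snippet above: ctsUpload is a local variable, so a cancel button handler in another method cannot reach the same token. A minimal sketch (field and method names are assumptions, not from the question) that keeps the CancellationTokenSource at class scope:

// Sketch only: assumes the same Google.Apis.Drive.v2 client as in the question.
private CancellationTokenSource _ctsUpload;

private async Task StartUploadAsync(Google.Apis.Drive.v2.Data.File image, Stream strm)
{
    _ctsUpload = new CancellationTokenSource();

    FilesResource.InsertMediaUpload request = App.GoogleDriveClient.Files.Insert(image, strm, "image/jpeg");
    request.ChunkSize = 512 * 1024;          // small chunks: cancellation is checked between chunks
    request.ProgressChanged += request_ProgressChanged;

    var progress = await request.UploadAsync(_ctsUpload.Token);
    if (progress.Status == Google.Apis.Upload.UploadStatus.Failed)
    {
        // A cancelled upload typically surfaces here, with the cancellation
        // exception available in progress.Exception.
    }
}

// Cancel button click handler.
private void btnCancel_Click(object sender, EventArgs e)
{
    _ctsUpload?.Cancel();
}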

Creating a Download Accelerator

I am referring to this article to understand file downloads using C#.
The code uses the traditional method of reading a Stream:
while ((bytesSize = strResponse.Read(downBuffer, 0, downBuffer.Length)) > 0)
How can I divide a file to be downloaded into multiple segments, so that I can download separate segments in parallel and merge them?
using (WebClient wcDownload = new WebClient())
{
    try
    {
        // Create a request to the file we are downloading
        webRequest = (HttpWebRequest)WebRequest.Create(txtUrl.Text);
        // Set default authentication for retrieving the file
        webRequest.Credentials = CredentialCache.DefaultCredentials;
        // Retrieve the response from the server
        webResponse = (HttpWebResponse)webRequest.GetResponse();
        // Ask the server for the file size and store it
        Int64 fileSize = webResponse.ContentLength;
        // Open the URL for download
        strResponse = wcDownload.OpenRead(txtUrl.Text);
        // Create a new file stream where we will be saving the data (local drive)
        strLocal = new FileStream(txtPath.Text, FileMode.Create, FileAccess.Write, FileShare.None);
        // It will store the current number of bytes we retrieved from the server
        int bytesSize = 0;
        // A buffer for storing and writing the data retrieved from the server
        byte[] downBuffer = new byte[2048];
        // Loop through the buffer until the buffer is empty
        while ((bytesSize = strResponse.Read(downBuffer, 0, downBuffer.Length)) > 0)
        {
            // Write the data from the buffer to the local hard drive
            strLocal.Write(downBuffer, 0, bytesSize);
            // Invoke the method that updates the form's label and progress bar
            this.Invoke(new UpdateProgessCallback(this.UpdateProgress), new object[] { strLocal.Length, fileSize });
        }
    }
You need several threads to accomplish that.
First you start the initial download thread, creating a WebClient and getting the file size. Then you can start several new threads, each of which adds a download Range header.
You also need logic that keeps track of the downloaded parts and creates new download parts as each one finishes.
http://msdn.microsoft.com/de-de/library/system.net.httpwebrequest.addrange.aspx
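A minimal sketch of that approach (not production code; it assumes the server supports HEAD and Range requests, and omits retries and error handling), using HttpWebRequest.AddRange to fetch fixed byte ranges in parallel and then merge them in order:

using System;
using System.IO;
using System.Net;
using System.Threading.Tasks;

static class SegmentedDownloader
{
    // Downloads 'url' to 'path' using 'segments' parallel byte-range requests.
    public static void Download(string url, string path, int segments = 4)
    {
        // Discover the total file size first.
        var head = (HttpWebRequest)WebRequest.Create(url);
        head.Method = "HEAD";
        long size;
        using (var resp = (HttpWebResponse)head.GetResponse())
            size = resp.ContentLength;

        long segSize = size / segments;
        var parts = new byte[segments][];

        Parallel.For(0, segments, i =>
        {
            long from = i * segSize;
            long to = (i == segments - 1) ? size - 1 : from + segSize - 1;

            var req = (HttpWebRequest)WebRequest.Create(url);
            req.AddRange(from, to);                 // request only this byte range
            using (var resp = (HttpWebResponse)req.GetResponse())
            using (var ms = new MemoryStream())
            {
                resp.GetResponseStream().CopyTo(ms);
                parts[i] = ms.ToArray();
            }
        });

        // Merge the segments in order.
        using (var output = new FileStream(path, FileMode.Create, FileAccess.Write))
            foreach (var part in parts)
                output.Write(part, 0, part.Length);
    }
}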
I've noticed that the WebClient implementation sometimes behaves strangely, so I still recommend implementing your own HTTP client if you really want to write a "big" download program.
PS: thanks to user svick

File upload progress bar in ASP.NET 2.0

I have a file upload on my .NET website. I used the FileUpload .NET control for the upload, and set the target of the form to a hidden iframe. I use client-side script to submit the form, then display a progress bar on the page, which does an AJAX request to an ashx file on the same server and updates the progress bar every second.
Here's the part of the code that runs while the file is uploading:
string fileString = StringGenerator.GenerateString(8);
System.IO.Stream fileStream = System.IO.File.Create(fileRoot+ fileString + ".mp3");
System.IO.Stream inputStream = fileUpload.PostedFile.InputStream;
byte[] buffer = new byte[256];
int read;
if(StaticProgress.Progress.ContainsKey(trackId)) StaticProgress.Progress[trackId] = 0d;
else StaticProgress.Progress.Add(trackId, 0d);
int totalRead = 0;
long length = inputStream.Length;
while ((read = inputStream.Read(buffer, 0, buffer.Length)) > 0)
{
    totalRead += read;
    fileStream.Write(buffer, 0, read);
    StaticProgress.Progress[trackId] = (double)totalRead / (double)length;
}
StaticProgress.Progress is a property which returns a static Dictionary.
Here is the code in the ashx file that gets the progress of the upload:
context.Response.ContentType = "text/javascript";
int id;
if (!int.TryParse(context.Request.QueryString["id"], out id) || !StaticProgress.Progress.ContainsKey(id))
    context.Response.Write("{\"error\":\"ID not found\"}");
else
    context.Response.Write("{\"id\":\"" + id + "\",\"decimal\":" + StaticProgress.Progress[id] + "}");
Here I always get "ID not found" while the file is uploading, but once the file has finished uploading, I get a successful response with decimal: 1.
I've tried using session variables before this and got the same result. I've tried using a public static (thread-safe?) Dictionary, but I get the same result. Has anyone come across anything like this before? Does the code not execute until the client request has been fully received?
The standard file upload control provides no progress capabilities - your code does not see the file until it has been fully received by the server. By the time you open the stream on the .PostedFile, the file has been completely uploaded.
There are a number of better solutions to uploading discussed here.
