Azure SAS download blob - c#

I'm trying to download a blob with SAS and I'm kinda clueless right now.
I'm listing all the blobs belonging to a user in a view. When a user clicks on a blob, it's supposed to start downloading.
Here is the view:
@foreach (var file in Model)
{
    <a href='@Url.Action("GetSaSForBlob", "Folder", new { blob = file })'>@file</a>
}
Here are my two functions, located in the "Folder" controller.
public void GetSaSForBlob(CloudBlockBlob blob)
{
    var sas = blob.GetSharedAccessSignature(new SharedAccessBlobPolicy()
    {
        SharedAccessStartTime = DateTime.UtcNow.AddMinutes(-5),
        SharedAccessExpiryTime = DateTime.UtcNow.AddHours(3),
        Permissions = SharedAccessBlobPermissions.Read | SharedAccessBlobPermissions.Write,
    });
    DownloadFileTest(string.Format(CultureInfo.InvariantCulture, "{0}{1}", blob.Uri, sas));
    //return string.Format(CultureInfo.InvariantCulture, "{0}{1}", blob.Uri, sas);
}
static void DownloadFileTest(string blobSasUri)
{
    CloudBlockBlob blob = new CloudBlockBlob(new Uri(blobSasUri));
    using (MemoryStream ms = new MemoryStream())
    {
        blob.DownloadToStream(ms);
        byte[] data = new byte[ms.Length];
        ms.Position = 0;
        ms.Read(data, 0, data.Length);
    }
}
What should I be passing from my view to GetSaSForBlob? At the moment, CloudBlockBlob blob is null.
Am I missing any code in the function DownloadFileTest?
Should I be calling DownloadFileTest directly from GetSaSForBlob?
How can I protect these two functions so people can't access them outside the view? They are both static functions now. I'm guessing that is not safe?

1. What is the value of file in your view? I don't think MVC can create a CloudBlockBlob object based on the file you provided, so this might be the reason your CloudBlockBlob is null.
2. In your DownloadFileTest you just download the binaries of the blob into a memory stream on your server, and that's all. If you need to let the user download it to their local disk, you need to write the binaries into the response stream. You can just use something like blob.DownloadToStream(Response.OutputStream) (see the sketch below).
3. That's up to you. You can merge them into the same method if you want.
4. If you want users to download blobs through your web front end (website or web service), as you are doing now, you need to set your blob container to private and secure your website or web service with something like the [Authorize] attribute. Basically, in your case you really don't need to use SAS at all, because all download requests are performed through your web front end.
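A minimal sketch combining points 2 and 4: an [Authorize]-protected MVC action that streams the blob straight to the HTTP response. GetContainer() is a hypothetical helper returning your CloudBlobContainer, and the view would pass the blob's name rather than a CloudBlockBlob:
[Authorize]
public ActionResult DownloadBlob(string blobName)
{
    // Resolve the blob from its name; MVC cannot model-bind a CloudBlockBlob.
    CloudBlobContainer container = GetContainer();
    CloudBlockBlob blob = container.GetBlockBlobReference(blobName);

    Response.ContentType = "application/octet-stream";
    Response.AddHeader("Content-Disposition", "attachment; filename=\"" + blobName + "\"");

    // Stream the blob directly to the client instead of buffering it in memory.
    blob.DownloadToStream(Response.OutputStream);
    return new EmptyResult();
}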
Hope this helps a bit.


Azure Functions: Is there an optimal way to upload a file without storing it in process memory?

I'm trying to upload a file to a blob container via HTTP.
On each request, the file is received through:
public class UploadFileFunction
{
    //Own created wrapper on BlobContainerClient
    private readonly IBlobFileStorageClient _blobFileStorageClient;

    public UploadFileFunction(IBlobFileStorageClient blobFileStorageClient)
    {
        _blobFileStorageClient = blobFileStorageClient;
    }

    [FunctionName("UploadFile")]
    public async Task<IActionResult> Run(
        [HttpTrigger(AuthorizationLevel.Anonymous, "post", Route = "/equipments/{route}")]
        HttpRequest request,
        [FromRoute] string route)
    {
        IFormFile file = request.Form.Files["File"];
        if (file is null)
        {
            return new BadRequestResult();
        }

        string fileNamePath = $"{route}_{request.Query["fileName"]}_{request.Query["fileType"]}";
        BlobClient blob = _blobFileStorageClient.Container.GetBlobClient(fileNamePath);
        try
        {
            await blob.UploadAsync(file.OpenReadStream(), new BlobHttpHeaders { ContentType = file.ContentType });
        }
        catch (Exception)
        {
            return new ConflictResult();
        }
        return new OkResult();
    }
}
Then I make the request with the file.
On UploadAsync, the whole stream of the file is loaded into process memory.
Is there some way to upload directly to the blob without holding the whole file in process memory?
Thank you in advance.
The best way to avoid this is to not upload your file via your own HTTP endpoint at all; asking how to keep the uploaded data out of process memory while it passes through your HTTP endpoint makes no sense.
Simply use the Azure Blob Storage REST API to upload the file directly to Azure Blob storage. Your own HTTP endpoint only needs to issue a shared access signature (SAS) token for the upload, and the client can then upload the file directly to Blob storage.
This pattern should be used for file uploads unless you have a very good reason not to. Your trigger function is only called after the HTTP runtime has finished with the HTTP request, so the trigger's HttpRequest object is allocated in process memory and then passed to the trigger.
I also suggest block blobs if you want to upload in multiple stages.
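A minimal sketch of such a SAS-issuing endpoint, assuming the v12 SDK (Azure.Storage.Sas) and the IBlobFileStorageClient wrapper from the question; the function name and route are illustrative:
[FunctionName("GetUploadSas")]
public IActionResult GetUploadSas(
    [HttpTrigger(AuthorizationLevel.Function, "get", Route = "equipments/{route}/upload-sas")]
    HttpRequest request,
    [FromRoute] string route)
{
    string fileNamePath = $"{route}_{request.Query["fileName"]}_{request.Query["fileType"]}";
    BlobClient blob = _blobFileStorageClient.Container.GetBlobClient(fileNamePath);

    var sasBuilder = new BlobSasBuilder
    {
        BlobContainerName = blob.BlobContainerName,
        BlobName = blob.Name,
        Resource = "b", // "b" = an individual blob
        ExpiresOn = DateTimeOffset.UtcNow.AddMinutes(15)
    };
    sasBuilder.SetPermissions(BlobSasPermissions.Create | BlobSasPermissions.Write);

    // GenerateSasUri requires the BlobClient to be authorized with a shared key credential.
    Uri uploadUri = blob.GenerateSasUri(sasBuilder);
    return new OkObjectResult(uploadUri.ToString());
}
The client then PUTs the raw bytes to that URI with the x-ms-blob-type: BlockBlob header, so the file never flows through the function.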
That's the default way UploadAsync works, and it is fine for small files. I ran into an out-of-memory issue with large files; the solution here is to use an append blob and AppendBlockAsync.
You will need to create the blob as an append blob, so you can keep appending to the end of the blob. The basic gist is:
Create an append blob
Go through the existing file and grab chunks of x MB (say 2 MB) at a time
Append these chunks to the append blob until the end of the file
Pseudo-code, something like below:
var appendBlobClient = _blobFileStorageClient.Container.GetAppendBlobClient(fileNamePath);
await appendBlobClient.CreateIfNotExistsAsync();
var appendBlobMaxAppendBlockBytes = appendBlobClient.AppendBlobMaxAppendBlockBytes;

using (var fileStream = file.OpenReadStream())
{
    int bytesRead;
    var buffer = new byte[appendBlobMaxAppendBlockBytes];
    while ((bytesRead = fileStream.Read(buffer, 0, buffer.Length)) > 0)
    {
        // Wrap only the bytes actually read so the final (short) block is correct.
        using (var stream = new MemoryStream(buffer, 0, bytesRead))
        {
            await appendBlobClient.AppendBlockAsync(stream);
        }
    }
}

Trying to download a Word document from an Azure blob

I am trying to download a Word document stored in an Azure blob container with private access, and I need to convert the downloaded document into a byte array so that I can send it to a React app.
This is the code I am trying:
[Authorize, HttpGet("{id}/{projectphase?}")]
public async Task<ActionResult<DesignProject>> GetDesignProject(string id, string projectphase = null)
{
    var blobContainerName = Startup.Configuration["AzureStorage:BlobContainerName"];
    var azureStorageConnectionString = Startup.Configuration["AzureStorage:ConnectionString"];
    BlobContainerClient blobContainerClient = new BlobContainerClient(azureStorageConnectionString, blobContainerName);
    blobContainerClient.CreateIfNotExists();
    ....... // not sure how to proceed further
    .......
    ......
    return new InlineFileContentResult('here i need to return byte array???', "application/docx") { FileDownloadName = fileName };
}
I have the full path where the file is stored, like below:
https://xxxx.blob.core.windows.net/design-project-files/99999-99/99999-99-BOD-Concept.docx
and I also have the file name: 99999-99-BOD-Concept.docx.
Could anyone please guide me on how to proceed with downloading the document? I would be very grateful.
Please try something like the following (untested code though):
public async Task<ActionResult<DesignProject>> GetDesignProject(string id, string projectphase = null)
{
    var blobContainerName = Startup.Configuration["AzureStorage:BlobContainerName"];
    var azureStorageConnectionString = Startup.Configuration["AzureStorage:ConnectionString"];
    BlobContainerClient blobContainerClient = new BlobContainerClient(azureStorageConnectionString, blobContainerName);
    blobContainerClient.CreateIfNotExists();

    // Create a BlobClient from the full URL only to extract the blob's name.
    var blobClient = new BlobClient(new Uri("https://xxxx.blob.core.windows.net/design-project-files/99999-99/99999-99-BOD-Concept.docx"));
    var blobName = blobClient.Name;
    var fileName = Path.GetFileName(blobName);

    // Recreate the client with the connection string so it is authorized to download.
    blobClient = new BlobClient(azureStorageConnectionString, blobContainerName, blobName);
    using (var ms = new MemoryStream())
    {
        await blobClient.DownloadToAsync(ms);
        return new InlineFileContentResult(ms.ToArray(), "application/docx") { FileDownloadName = fileName };
    }
}
Basically, what we're doing is first creating a BlobClient from the URL that you have, so that we can extract the blob's name out of that URL (you could do plain URL parsing as well). Once we have the blob's name, we create a new instance of BlobClient using the connection string, the blob container name, and the blob's name.
Then we download the blob's content as a stream and convert that stream to a byte array (this part I am not 100% sure my code would work) and return that byte array.
You don't really need this process where your React app makes a request to your server, your server downloads the file, and then sends it to the React app; that file in blob storage is on the web, downloadable straight from blob storage, so it's kinda unnecessary to hassle your server into being a proxy for it.
If you configure public access for blobs, then you just put that URL into your React app; the user clicks it, bytes download. Happy days. If you have a private container, you can still generate SAS URLs for the blobs.
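A hedged sketch of the private-container option with the v12 SDK, assuming blobContainerClient was constructed with a shared key credential (GenerateSasUri needs one):
BlobClient blob = blobContainerClient.GetBlobClient("99999-99/99999-99-BOD-Concept.docx");
// Short-lived, read-only SAS URL that can be handed straight to the React app.
Uri sasUrl = blob.GenerateSasUri(BlobSasPermissions.Read, DateTimeOffset.UtcNow.AddMinutes(15));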
If you actually need the bytes in your React app, then just fetch the file with a JavaScript web request; you'll need to set a CORS policy on the blob container, though.
If you really want to download the file to/via the server, you'll probably have to get into streaming it to the response stream connected to the React app, passed in as the SOMETHING below:
BlobClient blob = blobContainerClient.GetBlobClient( BLOB NAME I.E PATH INSIDE CONTAINER);
//download to a file or stream
await blob.DownloadToAsync( SOMETHING );
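A sketch of what that could look like in ASP.NET Core, streaming straight into Response.Body so nothing is buffered on the server (the route and content type are assumptions, not from the question):
[Authorize, HttpGet("download/{**blobName}")]
public async Task DownloadBlob(string blobName)
{
    BlobClient blob = blobContainerClient.GetBlobClient(blobName);

    Response.ContentType = "application/vnd.openxmlformats-officedocument.wordprocessingml.document";
    Response.Headers["Content-Disposition"] = $"attachment; filename=\"{Path.GetFileName(blobName)}\"";

    // Copy the blob directly into the HTTP response; no MemoryStream involved.
    await blob.DownloadToAsync(Response.Body);
}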

azure blob download via SAS

All,
I have a blob container with nested filenames (simulating a folder).
Is it possible to download via code (bypassing the webserver) through a SAS URI for a bunch of files beginning with a prefix?
Right now, I am zipping these files and sending them to a stream, as below:
CloudBlobContainer container = GetRootContainer();
CloudBlockBlob caseBlob = container.GetBlockBlobReference(folderPrefix);
await caseBlob.DownloadToStreamAsync(zipStream);
This works, and I can download the set of files beginning with that prefix to a client machine. However, this is dependent on the webserver's speed, and it's comparatively slow.
Is there an example of how to download using SAS by providing a URI for the folder? Here is an example I found in another post on Stack Overflow:
var sasConstraints = new SharedAccessBlobPolicy();
sasConstraints.SharedAccessStartTime = DateTime.UtcNow.AddMinutes(-5);
sasConstraints.SharedAccessExpiryTime = DateTime.UtcNow.AddMinutes(10);
sasConstraints.Permissions = SharedAccessBlobPermissions.Read;
var sasBlobToken = blob.GetSharedAccessSignature(sasConstraints);
return blob.Uri + sasBlobToken;
//Here can the URI point to the prefix??
Can I use something like this?
Thanks!
It is certainly doable; however, it won't be a single-step process.
First, you can't have a SAS token for a virtual folder. You will need to get a SAS token for the blob container with both List (for listing blobs) and Read (for downloading blobs) permissions.
Next, you will list the blobs in the virtual folder inside the blob container (see the sketch right after this paragraph). In order to do so, you will need to specify the virtual folder path as the prefix. This will give you a list of blobs inside that virtual folder. Please ensure that you specify an empty string as the delimiter so that all blobs inside that virtual folder are listed.
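For the server-side part, a sketch with the classic SDK used elsewhere in this thread (GetRootContainer() and folderPrefix are from the question):
// Container-level SAS with Read + List permissions
CloudBlobContainer container = GetRootContainer();
var sasConstraints = new SharedAccessBlobPolicy
{
    SharedAccessStartTime = DateTime.UtcNow.AddMinutes(-5),
    SharedAccessExpiryTime = DateTime.UtcNow.AddMinutes(30),
    Permissions = SharedAccessBlobPermissions.Read | SharedAccessBlobPermissions.List
};
string containerSas = container.GetSharedAccessSignature(sasConstraints);

// Flat listing under the prefix (equivalent to an empty delimiter)
foreach (var blob in container.ListBlobs(folderPrefix, useFlatBlobListing: true).OfType<CloudBlockBlob>())
{
    // A container SAS is valid for every blob inside the container.
    string blobSasUrl = blob.Uri + containerSas;
    Console.WriteLine(blobSasUrl);
}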
Once you have the list of blobs, you will need to read (download) the blobs and store the blob content somewhere in the browser.
After you have read the blob contents, you can use a JS-based zip library to dynamically create the zip file contents, and once all the blobs are added to the zip file, force a download of that zip file. A quick search for a JS-based zip library landed me here: https://stuk.github.io/jszip/. When I implemented this functionality in a product I built, I used the ZipJS library, but unfortunately I am not able to find it online now.
Below are bits and pieces of the code I wrote many years ago (it was part of a much larger application, and there was no JS Storage SDK when I wrote it, so apologies if the code does not make much sense to you; please use it for general guidance only).
zip.workerScriptsPath = '/Scripts/ZipJS/';
zipWriter = new zip.BlobWriter();
zip.createWriter(zipWriter, function (zipWriter) {
    startZippingFiles(zipWriter);
}, function () {
}, true);

function startZippingFiles(writer) {
    if (downloadedContent.length > 0) { //downloadedContent is an array containing downloaded blobs
        var downloadedContentItem = downloadedContent.shift(); //read first item
        var cloudBlob = downloadedContentItem.Blob;            //get the cloud blob object
        var blobContents = downloadedContentItem.Content;      //get the blob's content
        var status = downloadedContentItem.Status;             //status to track blob's download status
        if (status === 'Completed') {
            writer.add(cloudBlob.name,
                new zip.BlobReader(new Blob([blobContents], { type: cloudBlob.properties.contentType })), function () {
                    console.log(cloudBlob.name + ' added to zip...');
                    downloadedBlobsCount += 1;
                    startZippingFiles(writer);
                }, function (o) {
                    console.log('Adding ' + cloudBlob.name + ' to zip file. ' + parseFloat((o * 100) / cloudBlob.size).toFixed(2) + '% done...');
                });
        }
    } else {
        writer.close(function (blob) { //finally save the zipped data as download.zip
            saveAs(blob, "download.zip");
            zipWriter = null;
        });
        console.log("Download successful!");
    }
}

Downloading from Azure Blob storage in C#

I have a very basic but working Azure Blob uploader/downloader built on C# ASP.NET.
Except the download portion does not work. This block is called by the webpage, and I simply get no response. The uploads are a mixture of images and raw files. I'm looking for the user to be prompted to select a destination and just have the file download to their machine. Can anyone see where I am going wrong?
[HttpPost]
public void DownloadFile(string Name)
{
    Uri uri = new Uri(Name);
    string filename = System.IO.Path.GetFileName(uri.LocalPath);
    CloudBlobContainer blobContainer = _blobStorageService.GetCloudBlobContainer();
    CloudBlockBlob blob = blobContainer.GetBlockBlobReference(filename);
    using (Stream outputFile = new FileStream("Downloaded.jpg", FileMode.Create))
    {
        blob.DownloadToStream(outputFile);
    }
}
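For context, the method as written saves the blob to a file named Downloaded.jpg on the web server and returns nothing to the browser, which would explain the lack of a response. A hedged sketch of one possible fix, returning an MVC FileResult so the browser shows a save prompt (names reused from the question):
[HttpPost]
public ActionResult DownloadFile(string Name)
{
    Uri uri = new Uri(Name);
    string filename = System.IO.Path.GetFileName(uri.LocalPath);
    CloudBlobContainer blobContainer = _blobStorageService.GetCloudBlobContainer();
    CloudBlockBlob blob = blobContainer.GetBlockBlobReference(filename);

    var ms = new MemoryStream();
    blob.DownloadToStream(ms);
    ms.Position = 0;

    // File(...) sets Content-Disposition, so the user is prompted to save the file.
    return File(ms, "application/octet-stream", filename);
}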

Azure Storage Blob Rename

Is it possible to rename an Azure Storage blob using the Azure Storage API from a Web Role? The only solution I have at the moment is to copy the blob to a new blob with the correct name and delete the old one.
UPDATE:
I updated the code after @IsaacAbrahams' comments and @Viggity's answer. This version should prevent you from having to load everything into a MemoryStream, and it waits until the copy is completed before deleting the source blob.
For anyone arriving late to the party but stumbling on this post while using Azure Storage API V2, here's an extension method to do it quick and dirty (+ async version):
public static class BlobContainerExtensions
{
    public static void Rename(this CloudBlobContainer container, string oldName, string newName)
    {
        //Warning: this Wait() is bad practice and can cause deadlock issues when used from ASP.NET applications
        RenameAsync(container, oldName, newName).Wait();
    }

    public static async Task RenameAsync(this CloudBlobContainer container, string oldName, string newName)
    {
        var source = await container.GetBlobReferenceFromServerAsync(oldName);
        var target = container.GetBlockBlobReference(newName);

        await target.StartCopyFromBlobAsync(source.Uri);

        while (target.CopyState.Status == CopyStatus.Pending)
        {
            await Task.Delay(100);
            //CopyState is only refreshed when the blob's attributes are fetched
            await target.FetchAttributesAsync();
        }

        if (target.CopyState.Status != CopyStatus.Success)
            throw new Exception("Rename failed: " + target.CopyState.Status);

        await source.DeleteAsync();
    }
}
Update for Azure Storage 7.0
public static async Task RenameAsync(this CloudBlobContainer container, string oldName, string newName)
{
    CloudBlockBlob source = (CloudBlockBlob)await container.GetBlobReferenceFromServerAsync(oldName);
    CloudBlockBlob target = container.GetBlockBlobReference(newName);

    await target.StartCopyAsync(source);

    while (target.CopyState.Status == CopyStatus.Pending)
    {
        await Task.Delay(100);
        //Refresh the copy status
        await target.FetchAttributesAsync();
    }

    if (target.CopyState.Status != CopyStatus.Success)
        throw new Exception("Rename failed: " + target.CopyState.Status);

    await source.DeleteAsync();
}
Disclaimer: This is a quick and dirty method to make the rename execute synchronously. It fits my purposes; however, as other users noted, copying can take a long time (up to days), so the best way is NOT to perform this in one method like this answer, but instead:
Start the copy process
Poll the status of the copy operation
Delete the original blob when the copy is completed.
There is a practical way to do so, although the Azure Blob Service API does not directly support the ability to rename or move blobs.
You can, however, copy and then delete.
I originally used code from @Zidad, and in low-load circumstances it usually worked (I'm almost always renaming small files, ~10 KB).
DO NOT StartCopyFromBlob and then Delete!
In a high-load scenario, I LOST ~20% of the files I was renaming (thousands of files). As mentioned in the comments on his answer, StartCopyFromBlob just starts the copy; there is no way for you to wait for the copy to finish.
The only way for you to guarantee the copy finishes is to download it and re-upload. Here is my updated code:
public void Rename(string containerName, string oldFilename, string newFilename)
{
    var oldBlob = GetBlobReference(containerName, oldFilename);
    var newBlob = GetBlobReference(containerName, newFilename);

    using (var stream = new MemoryStream())
    {
        oldBlob.DownloadToStream(stream);
        stream.Seek(0, SeekOrigin.Begin);
        newBlob.UploadFromStream(stream);
        //copy metadata here if you need it too
        oldBlob.Delete();
    }
}
While this is an old post, perhaps this excellent blog post will show others how to very quickly rename blobs that have been uploaded.
Here are the highlights:
//set the azure container
string blobContainer = "myContainer";
//azure connection string
string dataCenterSettingKey = string.Format("DefaultEndpointsProtocol=https;AccountName={0};AccountKey={1}", "xxxx",
    "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx");
//setup the container object
CloudStorageAccount cloudStorageAccount = CloudStorageAccount.Parse(dataCenterSettingKey);
CloudBlobClient blobClient = cloudStorageAccount.CreateCloudBlobClient();
CloudBlobContainer container = blobClient.GetContainerReference(blobContainer);

// Set permissions on the container.
BlobContainerPermissions permissions = new BlobContainerPermissions();
permissions.PublicAccess = BlobContainerPublicAccessType.Blob;
container.SetPermissions(permissions);

//grab the blob
CloudBlob existBlob = container.GetBlobReference("myBlobName");
CloudBlob newBlob = container.GetBlobReference("myNewBlobName");
//create a new blob
newBlob.CopyFromBlob(existBlob);
//delete the old
existBlob.Delete();
Copy the blob, then delete it.
Tested for files of 1 GB in size, and it works OK.
For more information, see the sample on MSDN.
StorageCredentials cred = new StorageCredentials("[Your storage account name]", "[Your storage account key]");
CloudBlobContainer container = new CloudBlobContainer(new Uri("http://[Your storage account name].blob.core.windows.net/[Your container name]/"), cred);

string fileName = "OldFileName";
string newFileName = "NewFileName";

await container.CreateIfNotExistsAsync();

CloudBlockBlob blobCopy = container.GetBlockBlobReference(newFileName);
if (!await blobCopy.ExistsAsync())
{
    CloudBlockBlob blob = container.GetBlockBlobReference(fileName);
    if (await blob.ExistsAsync())
    {
        // copy
        await blobCopy.StartCopyAsync(blob);
        // then delete
        await blob.DeleteIfExistsAsync();
    }
}
Renaming is not possible. Here is a workaround using Azure SDK for .NET v12:
BlobClient sourceBlob = container.GetBlobClient(sourceBlobName);
BlobClient destBlob = container.GetBlobClient(destBlobName);

CopyFromUriOperation ops = await destBlob.StartCopyFromUriAsync(sourceBlob.Uri);

// WaitForCompletionAsync polls the service until the copy finishes
// and returns the number of bytes copied.
long copiedContentLength = (await ops.WaitForCompletionAsync()).Value;

await sourceBlob.DeleteAsync();
You can now, with the new release in public preview of ADLS Gen2 (Azure Data Lake Storage Gen2).
The hierarchical namespace capability allows you to perform atomic manipulation of directories and files, which includes the rename operation.
However, take note of the following:
"With the preview release, if you enable the hierarchical namespace, there is no interoperability of data or operations between Blob and Data Lake Storage Gen2 REST APIs. This functionality will be added during preview."
You will need to make sure you create the blobs (files) using ADLS Gen2 to rename them. Otherwise, wait for the interoperability between the Blob APIs and ADLS Gen2 to be added during the preview period.
Using Monza Cloud's Azure Explorer, I can rename an 18-gigabyte blob in under a second. Microsoft's Azure Storage Explorer takes 29 seconds to clone that same blob, so Monza is not doing a copy. I know the rename is real because, immediately after the Monza rename, clicking the container in Microsoft Azure Storage Explorer shows the blob with the new name.
The only way at the moment is to move the source blob to a new destination/name. Here is my code to do this:
public async Task<CloudBlockBlob> RenameAsync(CloudBlockBlob srcBlob, CloudBlobContainer destContainer, string name)
{
    if (srcBlob == null || !srcBlob.Exists())
    {
        throw new Exception("Source blob cannot be null and should exist.");
    }
    if (!destContainer.Exists())
    {
        throw new Exception("Destination container does not exist.");
    }

    //Copy source blob to destination container
    CloudBlockBlob destBlob = destContainer.GetBlockBlobReference(name);
    await destBlob.StartCopyAsync(srcBlob);

    //Wait for the server-side copy to complete before touching the source
    while (destBlob.CopyState.Status == CopyStatus.Pending)
    {
        await Task.Delay(100);
        await destBlob.FetchAttributesAsync();
    }

    //remove source blob after copy is done.
    srcBlob.Delete();
    return destBlob;
}
Here is a code sample if you want the blob lookup as part of the method:
public CloudBlockBlob RenameBlob(string oldName, string newName, CloudBlobContainer container)
{
    if (!container.Exists())
    {
        throw new Exception("Destination container does not exist.");
    }

    //Get blob reference
    CloudBlockBlob sourceBlob = container.GetBlockBlobReference(oldName);
    if (!sourceBlob.Exists())
    {
        throw new Exception("Source blob should exist.");
    }

    // Get blob reference to which the new blob must be copied
    CloudBlockBlob destBlob = container.GetBlockBlobReference(newName);
    destBlob.StartCopy(sourceBlob);

    //Wait for the copy to complete before deleting the source
    while (destBlob.CopyState.Status == CopyStatus.Pending)
    {
        System.Threading.Thread.Sleep(100);
        destBlob.FetchAttributes();
    }

    //Delete source blob
    sourceBlob.Delete();
    return destBlob;
}
There is also a way to rename your blob without copying it and without running any script: mounting Azure Blob storage to your OS: https://learn.microsoft.com/bs-latn-ba/azure/storage/blobs/storage-how-to-mount-container-linux
Then you can just use mv, and your blob will be renamed instantly.
Using Azure Storage Explorer is the easiest way to manually rename a blob. You can download it here https://azure.microsoft.com/en-us/features/storage-explorer/#overview
If you set the blob's ContentDisposition property to
attachment; filename="yourfile.txt"
the name of the download over HTTP will be whatever you want.
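A minimal sketch of setting that property with the classic SDK used elsewhere in this thread (the blob name is illustrative):
CloudBlockBlob blob = container.GetBlockBlobReference("some-unique-id");
blob.Properties.ContentDisposition = "attachment; filename=\"yourfile.txt\"";
await blob.SetPropertiesAsync();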
I think Storage was built with the assumption that data would be stored with unique identifiers primarily used as the filenames. Issuing shared access signatures for all downloads is a bit weird, though, so this isn't ideal for some people.
But I think abstracting away the user-facing filename is a good practice overall and encourages a more stable architecture.
This worked for me in a live environment with 100K users and file sizes no more than 100 MB. This is a similar synchronous approach to @viggity's answer, but the difference is that it copies everything on the Azure side, so you don't have to hold a MemoryStream on your server for the copy/upload to the new blob.
var account = new CloudStorageAccount(new Microsoft.WindowsAzure.Storage.Auth.StorageCredentials(StorageAccountName, StorageAccountKey), true);
CloudBlobClient blobStorage = account.CreateCloudBlobClient();
CloudBlobContainer container = blobStorage.GetContainerReference("myBlobContainer");

string fileName = "OldFileName";
string newFileName = "NewFileName";

CloudBlockBlob oldBlob = container.GetBlockBlobReference(fileName);
CloudBlockBlob newBlob = container.GetBlockBlobReference(newFileName);

newBlob.StartCopyFromBlob(oldBlob);
//Spin until the server-side copy has materialized the new blob
do { } while (!newBlob.Exists());
oldBlob.Delete();
