Expiry of URL after file upload using Azure Blob Storage? - c#

I've been researching and haven't found the maximum time limit that a URL can be accessed after a file upload using Azure Blob Storage. The URL that will be generated will be accessed by anonymous users, and I want to know the maximum time that anonymous users can access it.

There is no maximum time limit imposed by Azure on the expiry of a SAS URL. You can set it to 9999-12-31T23:59:59Z so that it effectively never expires.
However, this is not recommended. You should always issue short-lived SAS URLs so that they can't be misused.
You can find more information about the best practices for SAS here: https://learn.microsoft.com/en-us/azure/storage/common/storage-sas-overview#best-practices-when-using-sas.
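For illustration, a minimal sketch of issuing a short-lived read SAS in C#, assuming the v12 Azure.Storage.Blobs SDK; the account name, key, container name, and blob name below are placeholders:

using System;
using Azure.Storage;
using Azure.Storage.Blobs;
using Azure.Storage.Sas;

class SasExample
{
    static void Main()
    {
        // Placeholder account credentials.
        var accountName = "mystorageaccount";
        var accountKey = "<account-key>";
        var credential = new StorageSharedKeyCredential(accountName, accountKey);

        // Build a blob-level SAS that expires after one hour.
        var sasBuilder = new BlobSasBuilder
        {
            BlobContainerName = "uploads",
            BlobName = "report.pdf",
            Resource = "b",                                  // "b" = blob
            StartsOn = DateTimeOffset.UtcNow.AddMinutes(-5), // small clock-skew allowance
            ExpiresOn = DateTimeOffset.UtcNow.AddHours(1)    // short-lived, per the best practices above
        };
        sasBuilder.SetPermissions(BlobSasPermissions.Read);

        // Append the SAS token to the blob URL.
        var blobUri = new Uri($"https://{accountName}.blob.core.windows.net/uploads/report.pdf");
        var sasUri = new BlobUriBuilder(blobUri)
        {
            Sas = sasBuilder.ToSasQueryParameters(credential)
        }.ToUri();

        Console.WriteLine(sasUri);
    }
}

Setting ExpiresOn far into the future (up to 9999-12-31T23:59:59Z) gives the "never expires" behaviour described above, but a short window is the safer default.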

Related

Set the expiry date of a blob or auto delete the blob after X number of days in Azure Storage using C#?

I would like to automatically delete blobs in Azure Storage after a certain number of days via code in C#. There are two options to do this:
Write your own timer-triggered app that iterates over the blobs whose last modified date is more than X days old and deletes them.
Use the lifecycle management option available in the Azure portal, where we can add rules to delete blobs in a simple way.
Is there any other way, or any property available, where we could implement the same functionality (set the expiry time/date while uploading the blob to Azure Storage) through code in C#?
Any leads would be helpful.
You can configure blobs to be deleted after X days using Storage Lifecycle Management:
https://learn.microsoft.com/en-us/azure/storage/blobs/storage-lifecycle-management-concepts?tabs=azure-portal
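For completeness, a minimal sketch of the first option the question mentions (a periodic sweep in C# that deletes old blobs), assuming the v12 Azure.Storage.Blobs SDK; the connection string, container name, and 30-day cutoff are placeholders:

using System;
using Azure.Storage.Blobs;

class CleanupSweep
{
    static void Main()
    {
        // Placeholder connection string and container name.
        var container = new BlobContainerClient("<connection-string>", "uploads");

        // Delete blobs whose last modified date is more than 30 days old.
        var cutoff = DateTimeOffset.UtcNow.AddDays(-30);

        foreach (var blob in container.GetBlobs())
        {
            if (blob.Properties.LastModified < cutoff)
            {
                container.DeleteBlobIfExists(blob.Name);
            }
        }
    }
}

Lifecycle management is usually the simpler choice, since the rules run inside the storage service and you don't have to host or schedule the sweep yourself.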

How to get the size of Azure Blob Container without blob list iteration?

I'm using the Azure Blob Storage .NET client library v11.
My Azure blob container has more than 20 million files.
Listing blobs and getting the size of each blob is a very time-consuming operation.
I'm looking for a method to directly get the size of the blob container.
No, there is no such API in the SDK. Please keep an eye on this GitHub issue.
If you prefer using code, you must calculate the sizes one by one; a sketch follows below.
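A minimal sketch of that per-blob calculation, assuming the v12 Azure.Storage.Blobs SDK rather than the v11 library the question mentions; the connection string and container name are placeholders:

using System;
using Azure.Storage.Blobs;

class ContainerSize
{
    static void Main()
    {
        // Placeholder connection string and container name.
        var container = new BlobContainerClient("<connection-string>", "my-container");

        // Sum the size of every blob; with ~20 million blobs this will take a while.
        long totalBytes = 0;
        foreach (var blob in container.GetBlobs())
        {
            totalBytes += blob.Properties.ContentLength ?? 0;
        }

        Console.WriteLine($"Container size: {totalBytes / (1024.0 * 1024.0):F2} MB");
    }
}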
Otherwise, you can check it via the Azure portal UI. There is a "Calculate size" button under each container -> Properties.
Sign in to the Azure portal.
In the Azure portal, select Storage accounts.
From the list, choose a storage account.
In the Monitoring section, choose Insights (preview).

Azure Blob Storage large file upload

I am in the process of developing an application that will run on Azure and requires users to upload very large .tiff files. We are going to use blob storage to store the files. I have been reviewing several websites to determine the correct approach to handling this situation, and this link provides a good example, though I am using AngularJS on the frontend to grab and chunk the files and upload them via JavaScript to the SAS locator: http://azure.microsoft.com/en-us/documentation/articles/storage-dotnet-shared-access-signature-part-2/.

My main confusion centers around the expiration time I should give the SAS for the user to perform the upload. I would like a distinct SAS to be created each time a user performs a file upload, and for it to go away once the file is uploaded. I would also like to allow other site users to view the files. What is the best approach for handling these two scenarios? Also, there are examples of how to generate a SAS locator for the container and for the blob; if I need to add a new blob to a container, which makes more sense?
It sounds like you may want to use a stored access policy on the container. This allows you to modify the expiry time for the SAS after the SAS has been created. Take a look at http://azure.microsoft.com/en-us/documentation/articles/storage-manage-access-to-resources/#use-a-stored-access-policy.
With the stored access policy, you could create a SAS with a longer expiry time and with write permissions only for the upload operation, and then have your application check when the file has finished uploading, and change the expiry time to revoke the SAS.
You can create a separate SAS and stored access policy with read permissions only, for users who are viewing the files on the site. In this case you can provide a long expiry time and change it or revoke it only if you need to.
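A minimal sketch of the stored access policy approach in C#, assuming the v12 Azure.Storage.Blobs SDK; the account name, key, container name, and policy identifier are placeholders:

using System;
using Azure.Storage;
using Azure.Storage.Blobs;
using Azure.Storage.Blobs.Models;
using Azure.Storage.Sas;

class StoredPolicySas
{
    static void Main()
    {
        // Placeholder account credentials and container.
        var accountName = "mystorageaccount";
        var accountKey = "<account-key>";
        var credential = new StorageSharedKeyCredential(accountName, accountKey);
        var container = new BlobContainerClient(
            new Uri($"https://{accountName}.blob.core.windows.net/uploads"), credential);

        // Define a stored access policy that grants write-only access for uploads.
        var uploadPolicy = new BlobSignedIdentifier
        {
            Id = "upload-policy",
            AccessPolicy = new BlobAccessPolicy
            {
                PolicyStartsOn = DateTimeOffset.UtcNow.AddMinutes(-5),
                PolicyExpiresOn = DateTimeOffset.UtcNow.AddHours(4),
                Permissions = "w"
            }
        };
        container.SetAccessPolicy(permissions: new[] { uploadPolicy });

        // Issue a container SAS that references the policy instead of embedding
        // the expiry and permissions; revoking the policy revokes the SAS.
        var sasBuilder = new BlobSasBuilder
        {
            BlobContainerName = "uploads",
            Resource = "c",               // "c" = container
            Identifier = "upload-policy"
        };
        var sasUri = new BlobUriBuilder(container.Uri)
        {
            Sas = sasBuilder.ToSasQueryParameters(credential)
        }.ToUri();

        Console.WriteLine(sasUri);
    }
}

To revoke the upload SAS once the file has arrived, update the policy (for example, move PolicyExpiresOn into the past) or remove it from the container; a second, read-only policy can back the SAS you hand to viewers.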

Disable Azure Blob direct download

Is it possible to restrict/disable direct download of Azure blobs (when you just enter the link in the browser)?
If you provide access to a blob, either through a public container or even with a SAS, you cannot block users from downloading the blob if they want to.
If you provide a link to a blob in a public container, as below, then anyone can download the blob as long as they know the URL:
http[s]://your_azure_storage_name.blob.core.windows.net/public_container/blob_name
However, if you provide a link to your blob using a SAS, then you can restrict access to a limited set of users; they can still download the blob once they have access to the URL, though.
Just don't make the container public and folks won't be able to download the blobs anonymously. If you require some type of ACL, you'll need to create a service that performs the ACL check then returns the stream for the blob to approved users.
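If you go the "front it with a service" route, here is a minimal sketch of such an ACL-checking endpoint, assuming ASP.NET Core and the v12 Azure.Storage.Blobs SDK; the route, container wiring, and authorization attribute are placeholders for whatever access check your application actually needs:

using System.Threading.Tasks;
using Azure.Storage.Blobs;
using Microsoft.AspNetCore.Authorization;
using Microsoft.AspNetCore.Mvc;

[ApiController]
[Route("files")]
public class FilesController : ControllerBase
{
    private readonly BlobContainerClient _container; // points at a private (non-public) container

    public FilesController(BlobContainerClient container) => _container = container;

    [HttpGet("{name}")]
    [Authorize] // replace with your own ACL check
    public async Task<IActionResult> Download(string name)
    {
        var blob = _container.GetBlobClient(name);

        var exists = await blob.ExistsAsync();
        if (!exists.Value)
            return NotFound();

        // Stream the blob back only to callers that passed the ACL check.
        var stream = await blob.OpenReadAsync();
        return File(stream, "application/octet-stream", name);
    }
}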

Sync Files from Azure Blob to Local

I'd like to write a process in a worker role to download (sync) a batch of files under a folder (directory) to a local mirrored folder (directory).
Is there a timestamp (or a way to get one) for when a folder (directory) was last updated?
The folder (directory) structure is not known in advance, but simply put, I want to download whatever is there to local storage as soon as it changes. Apart from recursing through the structure and setting up a timer to check it repeatedly, what other smart ideas do you have?
(edit) p.s. I found many solutions for syncing files from local to Azure Storage, but the same principle cannot be applied to pulling files down from Azure blobs; I am still looking for the easiest way to download (sync) files to local storage as soon as they are changed.
Eric, I believe the concept you're trying to implement isn't really that effective for your core requirement, if I understand it correctly.
Consider the following scenario:
Keep your views in the blob storage.
Implement Azure (AppFabric) Cache.
On a web request, store any view file in the cache if it's not already there, with an unlimited (or a very long) expiration time.
Enable local cache on your web role instances with a short expiration time (e.g. 5 minutes).
Create a (single, separate) worker role, outside your web roles, which scans your blobs' ETags for changes at an interval and resets the view's cache key for any changed blob (sketched below).
Get rid of those ugly "workers" inside your web roles :-)
There're a few things to think about in this scenario:
Your updated views will get to the web role instances within "local cache expiration time + worker scan interval". The lower the values, the more distributed cache requests and blob storage transactions.
The Azure AppFabric Cache is the only Azure service preventing the whole platform from being truly scalable. You have to choose the best cache plan based on the overall size (in MB) of your views, the number of your instances, and the number of simultaneous cache requests required per instance.
Consider caching the compiled views inside your instances (not in the AppFabric cache) and resetting this local cache based on the dedicated AppFabric cache key/keys. This will greatly improve performance, as rendering the output HTML will be as easy as injecting the model into the pre-compiled views.
Of course, the cache-retrieval code in your web roles must be able to retrieve the view from the primary source (storage) if it is unable to retrieve it from the cache for whatever reason.
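A minimal sketch of the ETag-scanning worker described above, assuming the current Azure.Storage.Blobs SDK (rather than the worker-role-era client); the connection string, container name, scan interval, and the cache-reset step are placeholders:

using System;
using System.Collections.Generic;
using System.Threading;
using Azure.Storage.Blobs;

class EtagScanner
{
    static void Main()
    {
        // Placeholder connection string and container holding the views.
        var container = new BlobContainerClient("<connection-string>", "views");
        var knownEtags = new Dictionary<string, string>();

        while (true)
        {
            foreach (var blob in container.GetBlobs())
            {
                var etag = blob.Properties.ETag?.ToString() ?? "";
                if (!knownEtags.TryGetValue(blob.Name, out var previous) || previous != etag)
                {
                    knownEtags[blob.Name] = etag;
                    // A blob was added or changed: reset its cache key / re-download it here.
                    Console.WriteLine($"Changed: {blob.Name}");
                }
            }
            Thread.Sleep(TimeSpan.FromMinutes(1)); // scan interval
        }
    }
}
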
My suggestion is to create an abstraction on top of the blob storage, so that no one is directly writing to the blob. Then submit a message to Azure's Queue service when a new file is written. Have the file receiver poll that queue for changes. No need to scan the entire blob store recursively.
As far as the abstraction goes, use an Azure web role or worker role to authenticate and authorize your clients. Have it write to the Blob store(s). You can implement the abstraction using HTTPHandlers or WCF to directly handle the IO requests.
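A minimal sketch of that queue-based flow, assuming the current Azure.Storage.Blobs and Azure.Storage.Queues SDKs instead of the worker-role-era libraries; the connection string, container name, and queue name are placeholders:

using System;
using System.IO;
using Azure.Storage.Blobs;
using Azure.Storage.Queues;

class QueueNotifiedSync
{
    // Writer side: the abstraction uploads the blob, then enqueues a change notification.
    static void Upload(string connectionString, string blobName, Stream content)
    {
        var container = new BlobContainerClient(connectionString, "files");
        container.CreateIfNotExists();
        container.UploadBlob(blobName, content);

        var queue = new QueueClient(connectionString, "file-changes");
        queue.CreateIfNotExists();
        queue.SendMessage(blobName);
    }

    // Receiver side: the sync process polls the queue instead of scanning the whole container.
    static void Poll(string connectionString)
    {
        var queue = new QueueClient(connectionString, "file-changes");
        foreach (var message in queue.ReceiveMessages(maxMessages: 10).Value)
        {
            Console.WriteLine($"Download changed blob: {message.MessageText}");
            // Download the named blob to the local mirror here, then remove the message.
            queue.DeleteMessage(message.MessageId, message.PopReceipt);
        }
    }
}
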
This abstraction will allow you to overcome the blob limitation of 5000 files you mention in the comments above, and will allow you to scale out and provide additional features to your customers.
I'd be interested in seeing your code when you have a chance. Perhaps I can give you some more tips or code fixes.
