Is it possible to restrict/disable direct download of Azure Blobs (i.e. when you just enter the link in the browser)?
If you provide access to a blob, either through a public container or with a SAS, you cannot stop recipients from downloading the blob if they want to.
If you provide a link to a blob in a public container, as below, then anyone who knows the URL can download it:
http[s]://your_azure_storage_name.blob.core.windows.net/public_container/blob_name
However, if you provide a link to your blob using a SAS, you can restrict access to specific users, but they can still download the blob once they have the URL.
Just don't make the container public and folks won't be able to download the blobs anonymously. If you require some type of ACL, you'll need to create a service that performs the ACL check and then returns the blob's stream to approved users.
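For illustration, here is a minimal sketch of such a gatekeeper service as an ASP.NET Core endpoint. The connection string, container name, and the UserMayRead check are placeholders you would replace with your own ACL logic:

```csharp
using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Azure.Storage;
using Microsoft.Azure.Storage.Blob;

[ApiController]
[Route("files")]
public class FilesController : ControllerBase
{
    [HttpGet("{blobName}")]
    public async Task<IActionResult> Download(string blobName)
    {
        // Your own ACL check goes here (database lookup, claims, group membership, ...).
        if (!UserMayRead(User.Identity?.Name, blobName))
            return Forbid();

        var blob = CloudStorageAccount.Parse("<your-connection-string>")
            .CreateCloudBlobClient()
            .GetContainerReference("<private-container>")
            .GetBlockBlobReference(blobName);

        // Stream the blob back only to approved users; the container itself stays private.
        var stream = await blob.OpenReadAsync();
        return File(stream, blob.Properties.ContentType ?? "application/octet-stream", blobName);
    }

    // Placeholder for your real authorization logic.
    private bool UserMayRead(string user, string blobName) => false;
}
```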
Related
I'm using Azure Blob storage .Net client library v11.
My Azure blob container has more than 20 million files.
Listing the blobs and getting the size of each one is a very time-consuming operation.
I'm looking for a method to get the size of a blob container directly.
No, there is no such API/SDK. Please keep an eye on this GitHub issue.
If you prefer using code, you must calculate the sizes one by one. Here is an example.
Otherwise, you can check it via the Azure portal UI: there is a "Calculate size" button under each container -> Properties.
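For instance, a minimal sketch (assuming the v11 package Microsoft.Azure.Storage.Blob and a placeholder connection string and container name) could look like this:

```csharp
using System;
using Microsoft.Azure.Storage;
using Microsoft.Azure.Storage.Blob;

long totalBytes = 0;
var container = CloudStorageAccount.Parse("<your-connection-string>")
    .CreateCloudBlobClient()
    .GetContainerReference("<container-name>");

BlobContinuationToken token = null;
do
{
    // Flat listing pages through every blob; with 20+ million blobs this is slow,
    // which is exactly the problem described above - there is no single "container size" call.
    var segment = container.ListBlobsSegmented(
        prefix: null,
        useFlatBlobListing: true,
        blobListingDetails: BlobListingDetails.None,
        maxResults: 5000,
        currentToken: token,
        options: null,
        operationContext: null);

    foreach (var item in segment.Results)
    {
        if (item is CloudBlob blob)
            totalBytes += blob.Properties.Length;
    }

    token = segment.ContinuationToken;
} while (token != null);

Console.WriteLine($"Container size: {totalBytes} bytes");
```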
Sign in to the Azure portal.
In the Azure portal, select Storage accounts.
From the list, choose a storage account.
In the Monitoring section, choose Insights (preview).
I've been researching and haven't found the maximum time limit for which a URL can be accessed after a file is uploaded to Azure Blob Storage. The URL that is generated will be accessed by anonymous users, and I want to know the maximum time that anonymous users can access it.
The URL that will be generated will be accessed by anonymous users and I wanted to know what is the maximum time that anonymous users can access it?
As such, there's no maximum time limit imposed by Azure on the expiry of a SAS URL. You can set it to 9999-12-31T23:59:59Z so that it effectively never expires.
However, this is not recommended. You should always issue short-lived SAS URLs so that they can't be misused.
You can find more information about the best practices for SAS here: https://learn.microsoft.com/en-us/azure/storage/common/storage-sas-overview#best-practices-when-using-sas.
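As an illustration, here is a small sketch of issuing a short-lived, read-only blob SAS with the v11 library; the connection string, container and blob names are placeholders, and the one-hour expiry is just an example value:

```csharp
using System;
using Microsoft.Azure.Storage;
using Microsoft.Azure.Storage.Blob;

var blob = CloudStorageAccount.Parse("<your-connection-string>")
    .CreateCloudBlobClient()
    .GetContainerReference("<container-name>")
    .GetBlockBlobReference("<blob-name>");

// Short-lived, read-only SAS as recommended above. Nothing technically stops you
// from putting the expiry far in the future (e.g. year 9999), but that is discouraged.
var policy = new SharedAccessBlobPolicy
{
    Permissions = SharedAccessBlobPermissions.Read,
    SharedAccessExpiryTime = DateTimeOffset.UtcNow.AddHours(1)
};

string sasUrl = blob.Uri + blob.GetSharedAccessSignature(policy);
Console.WriteLine(sasUrl); // anyone with this URL can read the blob until it expires
```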
Is there an alternative to the Microsoft Azure C# API? I want to download files from blob URLs, but a Microsoft Azure storage account is not free, so I cannot use it. Is there any other API or way to download blobs?
Example for Blob-Url: blob:https://flex.aniflex.org/afbf9776-76ea-47dc-9951-2fadafc3adff
Caution: I'm not the host of the file, so I don't want to download the file from my own storage account.
@kaskorian I guess you're referring to browser file blobs... A Blob URL/Object URL is a pseudo-protocol that allows Blob and File objects to be used as a URL source for things like images, download links for binary data, and so forth.
Blob URLs can only be generated internally by the browser. URL.createObjectURL() creates a special reference to the Blob or File object, which can later be released using URL.revokeObjectURL(). These URLs can only be used locally, in the single instance of the browser and in the same session (i.e. the life of the page/document).
For example, you cannot hand an Image object raw byte data, as it would not know what to do with it. Images (which are binary data) have to be loaded via URLs, and the same applies to anything that requires a URL as its source. Instead of uploading the binary data and then serving it back via a URL, this extra local step lets you access the data directly without going through a server.
As the title says, I'm looking for a single shared access signature to access all the containers present in a storage account.
Currently I have to get a shared access signature for each container separately in order to create a separate EXTERNAL DATA SOURCE for each container, which I'm trying to avoid.
Is this possible?
Yes, it is entirely possible to do so. What you will need to do is get an Account Shared Access Signature (Account SAS).
Depending on the permissions granted in Account SAS, it will be applicable to all blob containers (and blobs) in that storage account.
You can learn more about Account SAS here: https://learn.microsoft.com/en-us/rest/api/storageservices/delegating-access-with-a-shared-access-signature.
From the link mentioned above:
An account-level SAS, introduced with version 2015-04-05. The account SAS delegates access to resources in one or more of the storage services. All of the operations available via a service SAS are also available via an account SAS. Additionally, with the account SAS, you can delegate access to operations that apply to a given service, such as Get/Set Service Properties and Get Service Stats. You can also delegate access to read, write, and delete operations on blob containers, tables, queues, and file shares that are not permitted with a service SAS. See Constructing an Account SAS for more information about account SAS.
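For example, with the v11 .NET library you could generate an Account SAS roughly like this (the connection string, permissions, and the 24-hour expiry are placeholder choices; adjust them to what your EXTERNAL DATA SOURCE actually needs):

```csharp
using System;
using Microsoft.Azure.Storage;

var account = CloudStorageAccount.Parse("<your-connection-string>");

// One Account SAS for the whole Blob service, so a single token covers every container.
var policy = new SharedAccessAccountPolicy
{
    Services = SharedAccessAccountServices.Blob,
    ResourceTypes = SharedAccessAccountResourceTypes.Container
                  | SharedAccessAccountResourceTypes.Object,
    Permissions = SharedAccessAccountPermissions.Read
                | SharedAccessAccountPermissions.List,
    SharedAccessExpiryTime = DateTimeOffset.UtcNow.AddHours(24),
    Protocols = SharedAccessProtocol.HttpsOnly
};

string accountSas = account.GetSharedAccessSignature(policy);
Console.WriteLine(accountSas); // append this to resource URLs or use it as the credential for each EXTERNAL DATA SOURCE
```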
I am developing an application that will run on Azure and requires users to upload very large .tiff files, which we are going to store in blob storage. I have been reviewing several websites to determine the correct approach, and this link provides a good example, but I am using AngularJS on the frontend to grab and chunk the files and upload them via JavaScript to the SAS locator: http://azure.microsoft.com/en-us/documentation/articles/storage-dotnet-shared-access-signature-part-2/. My main confusion centers on the expiration time I should give the SAS for the upload. I would like a distinct SAS to be created each time a user uploads a file, and for it to go away once the upload is finished. I would also like other site users to be able to view the files. What is the best approach for handling these two scenarios? Also, there are examples of generating a SAS locator for the container and for the blob; if I need to add a new blob to a container, which makes more sense?
It sounds like you may want to use a stored access policy on the container. This allows you to modify the expiry time for the SAS after the SAS has been created. Take a look at http://azure.microsoft.com/en-us/documentation/articles/storage-manage-access-to-resources/#use-a-stored-access-policy.
With the stored access policy, you could create a SAS with a longer expiry time and with write permissions only for the upload operation, and then have your application check when the file has finished uploading, and change the expiry time to revoke the SAS.
You can create a separate SAS and stored access policy with read permissions only, for users who are viewing the files on the site. In this case you can provide a long expiry time and change it or revoke it only if you need to.
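As a rough sketch with the v11 .NET library (the connection string, container name, and the policy name "upload-policy" are placeholders):

```csharp
using System;
using Microsoft.Azure.Storage;
using Microsoft.Azure.Storage.Blob;

var container = CloudStorageAccount.Parse("<your-connection-string>")
    .CreateCloudBlobClient()
    .GetContainerReference("<container-name>");

// 1. Define a stored access policy on the container (write-only, for uploads).
var permissions = container.GetPermissions();
permissions.SharedAccessPolicies["upload-policy"] = new SharedAccessBlobPolicy
{
    Permissions = SharedAccessBlobPermissions.Write,
    SharedAccessExpiryTime = DateTimeOffset.UtcNow.AddHours(4)
};
container.SetPermissions(permissions);

// 2. Issue a SAS that references the policy by name; its lifetime is controlled by the policy.
string uploadSas = container.GetSharedAccessSignature(null, "upload-policy");

// 3. Once the application sees that the upload has finished, revoke the SAS
//    by shortening (or deleting) the policy.
permissions.SharedAccessPolicies["upload-policy"].SharedAccessExpiryTime = DateTimeOffset.UtcNow;
container.SetPermissions(permissions);
```

A second, read-only policy can be added the same way for the users who only view the files, with a longer expiry that you change or remove only if you need to revoke access.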