Azure Blob Storage security options in MVC - C#

I am working on a solution where a small number of authenticated users should have full access to a set of Azure Blob Storage containers. I have currently implemented a system with public access, and wonder if I need to complicate the system further, or if this system is sufficiently secure. I have looked briefly into how Shared Access Signatures (SAS) work, but I am not sure whether they are really necessary, and therefore ask for your insight. The goal is to allow only authenticated users to have full access to the blob containers and their content.
The current system sets permissions in the following manner (C#, MVC):
// Retrieve a reference to my image container
myContainer = blobClient.GetContainerReference("myimagescontainer");

// Create the container if it doesn't already exist
if (myContainer.CreateIfNotExists())
{
    // Configure container for public access
    var permissions = myContainer.GetPermissions();
    permissions.PublicAccess = BlobContainerPublicAccessType.Container;
    myContainer.SetPermissions(permissions);
}
As a result, all blobs are fully accessible as long as you have the complete URL, but it does not seem to be possible to list the blobs in the container directly through the URL:
// This URL allows you to view one single image directly:
'https://mystorageaccount.blob.core.windows.net/mycontainer/mycontainer/image_ea644f08-3263-4a7f-9be7-bc42efbf8939.jpg'
// These URLs appear to return nothing but an error page:
'https://mystorageaccount.blob.core.windows.net/mycontainer/mycontainer/'
'https://mystorageaccount.blob.core.windows.net/mycontainer/'
'https://mystorageaccount.blob.core.windows.net/'
I do not find it an issue that authenticated users share complete URLs, allowing public access to a single image; however, no one but the authenticated users should be able to list, browse or access the containers directly to retrieve other images.
My question then becomes whether I should secure the system further, for instance using SAS, when it currently appears to work as intended, or leave the system as-is. As you might understand, I would prefer not to complicate the system unless strictly needed. Thanks!
The solution I ended up using has been given below :)

I use Ognyan Dimitrov's "Approach 2" to serve small PDFs stored in a private blob container ("No public read access") inside a browser window like this:
public ActionResult ShowPdf()
{
    string fileName = "fileName.pdf";
    var storageAccount = CloudStorageAccount.Parse(CloudConfigurationManager.GetSetting("StorageConnectionString"));
    var blobClient = storageAccount.CreateCloudBlobClient();
    var container = blobClient.GetContainerReference("containerName");
    var blockBlob = container.GetBlockBlobReference(fileName);

    Response.AppendHeader("Content-Disposition", "inline; filename=" + fileName);
    return File(blockBlob.DownloadByteArray(), "application/pdf");
}
with config file
<configuration>
  <appSettings>
    <add key="StorageConnectionString" value="DefaultEndpointsProtocol=https;AccountName=account-name;AccountKey=account-key" />
  </appSettings>
</configuration>
...which works perfectly for me!

So, here is what I ended up doing. Thanks to Neil and Ognyan for getting me there.
It works as follows:
All images are private, and cannot be viewed at all without a valid SAS.
Adding, deleting and modifying blobs is done within the controller itself, all privately. No SAS or additional procedures are needed for these tasks.
When an image is to be displayed to the user (either anonymously or authenticated), a function generates a SAS with a fast expiry that merely allows the browser to download the image (or blob) upon page generation and refresh, but not to copy/paste a useful URL to the outside.
I first explicitly set the container permissions to Private (this is also the default setting, according to Ognyan):
// Connect to storage account
...

// Retrieve reference to a container.
myContainer = blobClient.GetContainerReference("mycontainer");

// Create the container if it doesn't already exist.
if (myContainer.CreateIfNotExists())
{
    // Explicitly configure container for private access
    var permissions = myContainer.GetPermissions();
    permissions.PublicAccess = BlobContainerPublicAccessType.Off;
    myContainer.SetPermissions(permissions);
}
Then later, when wanting to display the image, I added an SAS string to the original storage path of the blob:
public string GetBlobPathWithSas(string myBlobName)
{
    // Get container reference
    ...

    // Get the blob, in my case an image
    CloudBlockBlob blob = myContainer.GetBlockBlobReference(myBlobName);

    // Generate a Shared Access Signature that expires after 1 minute, with Read and List access
    // (A shorter expiry might be feasible for small files, while larger files might need a
    // longer access period)
    string sas = myContainer.GetSharedAccessSignature(new SharedAccessBlobPolicy()
    {
        SharedAccessExpiryTime = DateTime.UtcNow.AddMinutes(1),
        Permissions = SharedAccessBlobPermissions.Read | SharedAccessBlobPermissions.List
    });

    return blob.Uri.ToString() + sas;
}
I then called the GetBlobPathWithSas() function from within the Razor view, so that each page refresh would give a valid path + SAS for displaying the image:
<img src="@GetBlobPathWithSas("myImage")" />
In general, I found this reference useful:
http://msdn.microsoft.com/en-us/library/ee758387.aspx
Hope that helps someone!

If you want only your authenticated users to have access, you have to make the container private. Otherwise it will be public, and it is only a matter of time before somebody else gets to the "almost private" content and you as a developer get embarrassed.
Approach 1 : You send a link to your authorized user.
In this case you give a SAS link to the user and they download the content directly from the blob.
You have to generate SAS signatures with a short access window so that your users can get your content and download/open it, and after they have left the site the link expires and the content is no longer available. This covers the case where they accidentally send the link over the wire and somebody else gets to the private content later.
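A minimal sketch of this approach, assuming the classic WindowsAzure.Storage SDK (CloudBlockBlob) and a hypothetical helper name, might look like this:

// Hypothetical helper: returns a short-lived, read-only URL for a single blob.
public string GetTemporaryDownloadUrl(CloudBlockBlob blob)
{
    var policy = new SharedAccessBlobPolicy
    {
        // Small allowance for clock skew, then a short window before the link expires
        SharedAccessStartTime = DateTime.UtcNow.AddMinutes(-5),
        SharedAccessExpiryTime = DateTime.UtcNow.AddMinutes(5),
        Permissions = SharedAccessBlobPermissions.Read
    };

    // Blob-level SAS: grants access to this one blob only
    string sasToken = blob.GetSharedAccessSignature(policy);
    return blob.Uri + sasToken;
}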
Approach 2 : Your web server gets the content and delivers it to your clients
In this case only your web app will have access and no SAS signatures have to be generated. You return a FileContentResult (in the case of MVC) and you are done. The downside is that your web server has to download the file before handing it to the client - double traffic. Here you have to handle the blob-to-web download carefully, because if 3 users download a 200 MB file at the same time and you are holding it all in RAM, it will be depleted.
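To keep large blobs out of server memory, one option (a sketch only, assuming the classic SDK and placeholder container/setting names) is to stream straight from Blob Storage to the response:

public ActionResult DownloadLargeBlob(string blobName)
{
    var storageAccount = CloudStorageAccount.Parse(
        CloudConfigurationManager.GetSetting("StorageConnectionString"));
    var container = storageAccount.CreateCloudBlobClient()
                                  .GetContainerReference("containerName");
    var blob = container.GetBlockBlobReference(blobName);

    // OpenRead returns a stream over the blob; MVC copies it to the response
    // in chunks instead of loading the whole file into RAM.
    Stream blobStream = blob.OpenRead();
    return File(blobStream, "application/octet-stream", blobName);
}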
** UPDATE **
@Intexx provided an updated link to the docs you need.

If you are using a public container then you are not really restricting access to authenticated users.
If the spec said "only authenticated users should have access" then I personally would find using a public container to be unacceptable. SAS is not very hard - the libraries do most of the work.
BTW: the format to list the items in a container is: https://myaccount.blob.core.windows.net/mycontainer?restype=container&comp=list
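The same listing is also available through the classic SDK (a sketch, reusing the blobClient from the question's snippet):

// Lists every blob in the container (flat listing) - exactly what a public
// container exposes to anyone via the ?restype=container&comp=list URL.
var container = blobClient.GetContainerReference("mycontainer");
foreach (IListBlobItem item in container.ListBlobs(null, useFlatBlobListing: true))
{
    Console.WriteLine(item.Uri);
}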


Problems generating a SAS token in C# for Blob paths with special characters

We are implementing a file store in our application and we store all the files in private containers in Azure Blob Storage. We have a virtual folder system which we replicate in our Blob storage.
For example, let's say I work for Company A and I upload file_1.txt to Folder #1; it will reside at /vault/Company A/Folder #1/file_1.txt in Blob storage.
We generate SAS tokens using the following code:
public static Uri GetServiceSasUriForCloudBlockBlob(CloudBlockBlob cloudBlockBlob, string permissions = "r")
{
    var sasBuilder = new SharedAccessBlobPolicy()
    {
        SharedAccessStartTime = DateTimeOffset.UtcNow.AddMinutes(-5),
        SharedAccessExpiryTime = DateTimeOffset.UtcNow.AddMinutes(5),
        Permissions = SharedAccessBlobPolicy.PermissionsFromString(permissions)
    };
    var sasUri = cloudBlockBlob.GetSharedAccessSignature(sasBuilder);
    return new Uri(cloudBlockBlob.Uri + sasUri);
}
However, this does not work. The error we get is:
<Error>
<Code>AuthenticationFailed</Code>
<Message>Server failed to authenticate the request. Make sure the value of Authorization header is formed correctly including the signature. RequestId:f82118d1-101e-002a-1381-97ac16000000 Time:2022-07-14T12:55:34.6370028Z</Message>
<AuthenticationErrorDetail>Signature did not match. String to sign used was r 2022-07-14T12:50:27Z 2022-07-14T13:00:27Z /blob/[blobname]/vault/Company A/Folder #1/file_1.txt 2019-07-07 b </AuthenticationErrorDetail>
</Error>
When generating a SAS token from the Azure Portal or Azure Storage Explorer, there is no problem.
It seems to be an issue with the special characters in the path to the file in the blob. So we tried escaping all spaces and special characters manually to fix this issue; however, when doing this, the CloudBlockBlob encodes it again (e.g. it escapes My%20File.txt to My%2520File.txt).
Currently the only operation we use is Read on Objects, but this may be expanded in the future.
We could disallow spaces and special characters in folders/files, but that feels like working around the issue rather than solving it. How can we fix this without implementing naming policies?
EDIT: Turns out this was a design issue, and while the SDK docs never explicitly discourage the use of unescaped blob paths, they do disallow container names containing anything other than alphanumerics and dashes.
For anyone having the same issue (whether on SDK version 11 or 12), I can highly recommend encoding each path segment separately rather than using raw spaces/special characters:
var fileName = "file.txt";
// Note that the order here matters
var folderNames = new[] { "Folder #1", "Folder #1.1" };
// becomes: Folder+%25231/Folder+%25231.1
var encodedPath = folderNames.Select(WebUtility.UrlEncode).Aggregate((x, y) => x + "/" + y);
// becomes: Folder+%25231/Folder+%25231.1/file.txt
var blobPath = $"{encodedPath}/{fileName}";
This looks worse in Azure Storage Explorer, but it does circumvent issues with encoding strings programmatically.
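For completeness, a hypothetical usage sketch tying the encoded path to the helper above (the container name and blobClient are placeholders, not part of the original post):

// Get a blob reference using the pre-encoded path, then build a read-only SAS URL with the helper above
var container = blobClient.GetContainerReference("vault");
var blob = container.GetBlockBlobReference(blobPath);
Uri sasUri = GetServiceSasUriForCloudBlockBlob(blob, "r");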

Realm sync permissions for flexibly-named partitions based on user id

I'm new to Realm Sync (and Realm). I'm trying to convert a REST / SQL Server system to Realm Sync (to avoid having to write my own local-device caching code).
I got a simple configuration working, with a single API-key user and the null partition, read and write permissions just set to true.
But for my more complex application, I want smaller sub-partitions to reduce the amount of data that needs to be cached on local devices, and I want the sub-partitions to be able to be created dynamically by the client. Ideally, I would like to allow an API-key user to connect to any partition whose name starts with their user id (or some other known string, e.g. the profile name). But I can't find a way to get a "starts with" condition into the permissions.
My best attempt was to try setting Read and Write sync permissions to:
{
  "%%partition": {
    "$regex": "^%%user.id"
  }
}
but my client just fails to connect, saying Permission denied (BIND, REFRESH). (Yes, I tried using "$regex": /^%%user.id/ but the Realm UI rejected that syntax.) The Realm Sync log says "user does not have permission to sync on partition (ProtocolErrorCode=206)".
As you can see in the log image, the partition name was equal to the user id for this test.
Is what I'm trying to do possible? If so, how do I set up the Sync Permissions to make it work?
This can be done using a function. If, like me, you're new to Realm Sync and not fluent in JavaScript, don't worry - it turns out not to be too hard to do after all. (Thanks Jay for encouraging me to try it!)
I followed the instructions on the Define a Function page to create my userCanAccessPartition function like this:
exports = function(partition) {
    return partition.startsWith(context.user.id);
};
Then I set my sync permissions to:
{
  "%%true": {
    "%function": {
      "name": "userCanAccessPartition",
      "arguments": ["%%partition"]
    }
  }
}

ImageResizer - Need to remove images that are in the image cache

I use the ImageResizer tool with the DiskCache plugin. We use Azure blob storage to store images and a custom plugin to serve those images within the resizer code. Something went awry and some of the blobs have been deleted, but are cached in the DiskCache in the resizer.
I need to be able to build the hash key in order to identify the images in the cache. I tried building the key from what I can see in the code, but the string returned does not yield a file in the cache:
var vp = ResolveAppRelativeAssumeAppRelative(virtualPath);
var qs = PathUtils.BuildQueryString(queryString).Replace("&red_dot=true", "");
var blob = new Blob(this, virtualPath, queryString);
var modified = blob.ModifiedDateUTC;
var cachekey = string.Format("{0}{1}|{2}", vp, qs, blob.GetModifiedDateUTCAsync().Result.Ticks.ToString(NumberFormatInfo.InvariantInfo));
var relativePath = new UrlHasher().hash(cachekey, 4096, "/");
How can I query the cache to see if the images are still cached and then delete them if they do not exist in the blob storage account?
Note: I have tried to use the AzureReader2 plugin and it doesn't work for us at the moment.
Custom plugins are responsible for controlling access to cached files.
If you want to see where an active request is being cached, check out HttpContext.Current.Items["FinalCachedFile"] during the EndRequest phase of the request. You could do this with an event handler.
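A minimal sketch of such a handler in Global.asax.cs, assuming the item key mentioned above (everything else here is a placeholder):

// Fires at the end of every request; ImageResizer's DiskCache sets this item
// only for requests that were actually served through the cache.
protected void Application_EndRequest(object sender, EventArgs e)
{
    var cachedPath = HttpContext.Current.Items["FinalCachedFile"] as string;
    if (!string.IsNullOrEmpty(cachedPath))
    {
        // e.g. log the physical cache path so orphaned entries can be matched up later
        System.Diagnostics.Trace.WriteLine("DiskCache file: " + cachedPath);
    }
}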

Downloading from Blob Azure redirects wrong

When I press download and call this action, I get the result
The resource you are looking for has been removed, had its name
changed, or is temporarily unavailable.
and directed to
http://integratedproject20170322032906.azurewebsites.net/MyDocumentUps/Download/2020Resume.pdf
Why does it link to above and not to
https://filestorageideagen.blob.core.windows.net/documentuploader/2020Resume.pdf
as shown in the controller?
Here is my actionlink in the view
@Html.ActionLink("Download", "Download", "MyDocumentUps",
    new { id = item.DocumentId.ToString() + item.RevisionId.ToString() + item.Attachment },
    new { target = "_blank" }) |
public ActionResult Download(string id)
{
    string path = @"https://filestorageideagen.blob.core.windows.net/documentuploader/";
    return View(path + id);
}
I can think of 2 ways by which you can force download a file in the client browser.
Return a FileResult or FileStreamResult from your controller. Here's an example of doing so: How can I present a file for download from an MVC controller?. Please note that this will download the file to your server first and then stream the contents to the client browser from there. For smaller files/low load this approach may work, but as your site grows or the files to be downloaded become bigger, it will create more stress on your web server.
Use a Shared Access Signature (SAS) for the blob with the Content-Disposition response header set. In this approach you simply create a SAS token for the blob to be downloaded and use it to build a SAS URL. Your controller then returns a RedirectResult with this URL. The advantage of this approach is that all downloads happen directly from Azure Storage and not through your server.
When creating SAS, please ensure that
You have at least Read permission in the SAS.
Content-Disposition header is overridden in the SAS.
The expiry of SAS should be sufficient for the file to download.
Here's the sample code to create a Shared Access Signature.
var sasToken = blob.GetSharedAccessSignature(new SharedAccessBlobPolicy()
{
    Permissions = SharedAccessBlobPermissions.Read,
    SharedAccessExpiryTime = DateTime.UtcNow.AddHours(1),
}, new SharedAccessBlobHeaders()
{
    ContentDisposition = "attachment;filename=" + blob.Name
});

var blobUrl = string.Format("{0}{1}", blob.Uri.AbsoluteUri, sasToken);
return Redirect(blobUrl);
P.S. While I was answering the question, the question got edited so the answer may seem a bit out of whack :) :).
Given that you have the image URL as part of your Model and the blob has no access restrictions:
<img src="@Model.YourImageUri" />
var image = new BitmapImage(new Uri("https://your_storage_account_name.blob.core.windows.net/your_container/your_image.jpg"));

Using S3 Storage with .NET

I am using AWS.Net to upload user content (images) and display them on my site. This is what my code looks like currently for the upload:
using (var client = Amazon.AWSClientFactory.CreateAmazonS3Client())
{
    var putObjectRequest = new PutObjectRequest
    {
        BucketName = bucketName,
        InputStream = fileStream,
        Key = fileName,
        CannedACL = S3CannedACL.PublicRead,
        //MD5Digest = md5Base64,
        //GenerateMD5Digest = true,
        Timeout = 3600000 //1 Hour
    };

    S3Response response = client.PutObject(putObjectRequest);
    response.Dispose();
}
What's the best way to store the path to these files? Is there a way to get a link to my file from the response?
Currently I just have a URL in my web.config like https://s3.amazonaws.com/<MyBucketName>/ and then, when I need to show an image, I take that string and use the key from the object I store in the DB that represents the uploaded file.
Is there a better way to do this?
All the examples that come with it don't really address this kind of usage. And the documentation isn't online and I don't know how to get to it after I install the SDK, despite following the directions on Amazon.
Your approach of storing paths including your bucket names in web.config is the same thing that I do, which works great. I then just store the relative paths in various database tables.
The nice thing about this approach is that it makes it easier to migrate to different storage mechanisms or CDNs such as CloudFront. I don't think there's a better way, because S3 files reside on a different domain (or subdomain if you do CNAME mapping), and thus your .NET runtime does not run under the same domain or subdomain.
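As a sketch of that pattern (the key name and example paths are placeholders, not from the original post):

// web.config: <add key="S3BaseUrl" value="https://s3.amazonaws.com/MyBucketName/" />
// The database stores only the object key, e.g. "uploads/avatar123.jpg".
public static string GetImageUrl(string objectKey)
{
    var baseUrl = System.Configuration.ConfigurationManager.AppSettings["S3BaseUrl"];
    return baseUrl + objectKey;
}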
There is also a "Location" property in the response which points directly to the URI where the S3 object resides.
