Google Cloud Storage API (C#) - cache header metadata

I upload to a Google Cloud Storage bucket via the storage C# API (Google.Cloud.Storage.V1). These are public files accessed by client pages.
Problem:
The files are served with "private, max-age=0".
Question:
I would like to set custom cache headers instead, either while uploading the files or afterwards, via the API itself. Is it possible to set the cache header or other metadata via the C# Google Storage API call?
I am also curious: since I have not set any cache header, why does Google Storage serve these files with max-age=0 instead of sending no cache header at all?

You can set the cache control when you call UploadObject, if you specify an Object instead of just the bucket name and object name. Here's an example:
var client = StorageClient.Create();
var obj = new Google.Apis.Storage.v1.Data.Object
{
    Bucket = bucketId,
    Name = objectName,
    CacheControl = "public,max-age=3600"
};
var stream = new MemoryStream(Encoding.UTF8.GetBytes("Hello world"));
client.UploadObject(obj, stream);
You can do it after the fact as well using PatchObject:
var patch = new Google.Apis.Storage.v1.Data.Object
{
    Bucket = bucketId,
    Name = objectName,
    CacheControl = "public,max-age=7200"
};
client.PatchObject(patch);
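If you want to confirm that the header took effect, a quick check (just a sketch, reusing the bucketId and objectName from above) is to fetch the object metadata back:
var updated = client.GetObject(bucketId, objectName); // fetches the object's metadata
Console.WriteLine(updated.CacheControl); // should now print "public,max-age=7200"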
I'm afraid I don't know the details of what cache control is applied when you haven't specified anything, though.

Related

Images stored in Azure Blob don't cache in Browser

I have set the cache property to "max-age=3600, must-revalidate" when uploading. Looking at the network tab of the browser developer tools, I can see that the header is correct and the cache property is set.
But the photos still get fetched from Azure Blob Storage every time with a 200 OK response.
The way I upload is as follows:
- Photo uploaded by user
- GUID added to the name of the photo and saved to the Azure SQL database with the user info
- Blob reference created using the Azure library in C#
- Cache properties set and saved after uploading
var uniqueFileName = Guid.NewGuid().ToString() + "_" + fileName;
var newBlob = container.GetBlockBlobReference(uniqueFileName);
using var fileStream = file.OpenReadStream();
newBlob.UploadFromStreamAsync(fileStream).Wait();
newBlob.Properties.CacheControl = "max-age=3600, must-revalidate";
newBlob.SetPropertiesAsync().Wait();
The blob is fetched using a URI + SAS token.
I get the name from the SQL database, look it up in Azure Blob Storage, then get the URI and add the SAS token to give the client access to the blob.
var blob = container.GetBlockBlobReference(fileName);
var sasToken = blob.GetSharedAccessSignature(new SharedAccessBlobPolicy()
{
    Permissions = SharedAccessBlobPermissions.Read,
    SharedAccessExpiryTime = DateTime.UtcNow.AddHours(1)
});
var blobUrl = blob.Uri.AbsoluteUri + sasToken;
I found that every time a new SAS is created, the image is treated as a new version, and I do create a new SAS for every request to build the link the client uses to download the image. How can I avoid this behavior? Should I make the container publicly readable so that no SAS is needed?
Cache control should work when accessing the blob with a SAS token.
If you're using a browser (like Chrome) to visit the blob with the SAS token, please make sure you DON'T tick the "Disable cache" checkbox in the developer tools. With it unchecked, I can see the cache being used on my side.
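Not from the answer above, but one possible way to keep the browser cache usable even though a SAS is generated on every request is to anchor the SAS validity to a fixed time window, so that repeated requests within the window produce an identical token and therefore an identical URL (a sketch using the same SharedAccessBlobPolicy as in the question):
// Anchor the SAS to the top of the current hour so the token (and URL) stays
// stable for the whole window instead of changing on every request.
var now = DateTime.UtcNow;
var windowStart = new DateTime(now.Year, now.Month, now.Day, now.Hour, 0, 0, DateTimeKind.Utc);
var sasToken = blob.GetSharedAccessSignature(new SharedAccessBlobPolicy()
{
    Permissions = SharedAccessBlobPermissions.Read,
    SharedAccessStartTime = windowStart,
    SharedAccessExpiryTime = windowStart.AddHours(2) // overlap so links issued near the boundary remain valid
});
var blobUrl = blob.Uri.AbsoluteUri + sasToken; // identical URL for the whole window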

ImageResizer - Need to remove images that are in the image cache

I use the ImageResizer tool with the DiskCache plugin. We use Azure Blob Storage to store images and a custom plugin to serve those images within the resizer code. Something went awry and some of the blobs have been deleted, but they are still cached in the resizer's DiskCache.
I need to be able to build the hash key to identify the images in the cache. I tried building the key from what I can see in the code, but the string returned does not yield a file in the cache:
var vp = ResolveAppRelativeAssumeAppRelative(virtualPath);
var qs = PathUtils.BuildQueryString(queryString).Replace("&red_dot=true", "");
var blob = new Blob(this, virtualPath, queryString);
var modified = blob.ModifiedDateUTC;
var cachekey = string.Format("{0}{1}|{2}", vp, qs, blob.GetModifiedDateUTCAsync().Result.Ticks.ToString(NumberFormatInfo.InvariantInfo));
var relativePath = new UrlHasher().hash(cachekey, 4096, "/");
How can I query the cache to see if the images are still cached and then delete them if they do not exist in the blob storage account?
Note: I have tried to use the AzureReader2 plugin and it doesn't work for us at the moment.
Custom plugins are responsible for controlling access to cached files.
If you want to see where an active request is being cached, check out HttpContext.Current.Items["FinalCachedFile"] during the EndRequest phase of the request. You could do this with an event handler.
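As a rough sketch of that suggestion (the "FinalCachedFile" key comes from the note above; everything else is illustrative), an EndRequest handler in Global.asax.cs could look like this:
protected void Application_EndRequest(object sender, EventArgs e)
{
    // DiskCache exposes the physical path of the cached file for the current request here.
    var cachedPath = HttpContext.Current.Items["FinalCachedFile"] as string;
    if (!string.IsNullOrEmpty(cachedPath))
    {
        // Log or collect the path so cached files can later be compared against blob storage.
        System.Diagnostics.Trace.WriteLine("DiskCache file: " + cachedPath);
    }
}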

Downloading from Azure Blob redirects to the wrong URL

When I press download and call this action, I get the result
"The resource you are looking for has been removed, had its name changed, or is temporarily unavailable."
and am directed to
http://integratedproject20170322032906.azurewebsites.net/MyDocumentUps/Download/2020Resume.pdf
Why does it link to the above and not to
https://filestorageideagen.blob.core.windows.net/documentuploader/2020Resume.pdf
as shown in the controller?
Here is my ActionLink in the view:
@Html.ActionLink("Download", "Download", "MyDocumentUps",
    new { id = item.DocumentId.ToString() + item.RevisionId.ToString() + item.Attachment },
    new { target = "_blank" }) |
public ActionResult Download(string id)
{
    string path = @"https://filestorageideagen.blob.core.windows.net/documentuploader/";
    return View(path + id);
}
I can think of 2 ways by which you can force download of a file in the client browser.
1. Return a FileResult or FileStreamResult from your controller. Here's an example of doing so: How can I present a file for download from an MVC controller?. Please note that this will download the file to your server first and then stream the contents to the client browser from there. For smaller files/low load this approach may work, but as your site grows or the files to be downloaded become bigger, it will put more stress on your web server. (A minimal sketch of this approach is included after the SAS example below.)
2. Use a Shared Access Signature (SAS) for the blob with the Content-Disposition response header set. In this approach you simply create a SAS token for the blob to be downloaded and use it to build a SAS URL. Your controller then returns a RedirectResult with this URL. The advantage of this approach is that all downloads happen directly from Azure Storage and not through your server.
When creating the SAS, please ensure that:
- You have at least Read permission in the SAS.
- The Content-Disposition header is overridden in the SAS.
- The expiry of the SAS is sufficient for the file to download.
Here's the sample code to create a Shared Access Signature.
var sasToken = blob.GetSharedAccessSignature(new SharedAccessBlobPolicy()
{
    Permissions = SharedAccessBlobPermissions.Read,
    SharedAccessExpiryTime = DateTime.UtcNow.AddHours(1),
}, new SharedAccessBlobHeaders()
{
    ContentDisposition = "attachment;filename=" + blob.Name
});
var blobUrl = string.Format("{0}{1}", blob.Uri.AbsoluteUri, sasToken);
return Redirect(blobUrl);
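And here is a minimal sketch of the first approach (returning the file from your controller); it assumes the same WindowsAzure.Storage container object used elsewhere in this thread and is only illustrative:
public async Task<ActionResult> Download(string id)
{
    var blob = container.GetBlockBlobReference(id);
    var stream = new MemoryStream();
    await blob.DownloadToStreamAsync(stream); // the file passes through your web server
    stream.Position = 0;
    // Passing a download file name sets Content-Disposition so the browser downloads the file.
    return File(stream, "application/octet-stream", id);
}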
P.S. While I was answering the question, the question got edited so the answer may seem a bit out of whack :) :).
Given that you have the image URL as part of your Model and the blob has no restrictions:
<img src="@Model.YourImageUri" />
var image = new BitmapImage(new Uri("https://your_storage_account_name.blob.core.windows.net/your_container/your_image.jpg"));

Getting Access Denied Exception when deleting a file in Amazon S3 using the .Net AWSSDK

I am trying to do some simple file IO using Amazon S3 and C#.
So far I have been able to create files and list them. I am the bucket owner and I should have full access. In CloudBerry I can create and delete files in the bucket. In my code, when I try to delete a file, I get an access denied exception.
This is my test method:
[Test]
public void TestThatFilesCanBeCreatedAndDeleted()
{
    const string testFile = "test.txt";
    var awsS3Helper = new AwsS3Helper();
    awsS3Helper.AddFileToBucketRoot(testFile);
    var testList = awsS3Helper.ListItemsInBucketRoot();
    Assert.True(testList.ContainsKey(testFile)); // This test passes
    awsS3Helper.DeleteFileFromBucket(testFile); // Access denied exception here
    testList = awsS3Helper.ListItemsInBucketRoot();
    Assert.False(testList.ContainsKey(testFile));
}
My method to add a file:
var request = new PutObjectRequest();
request.WithBucketName(bucketName);
request.WithKey(fileName);
request.WithContentBody("");
S3Response response = client.PutObject(request);
response.Dispose();
My method to delete a file:
var request = new DeleteObjectRequest()
{
    BucketName = bucketName,
    Key = fileKey
};
S3Response response = client.DeleteObject(request);
response.Dispose();
After running the code the file is visible in CloudBerry and I can delete it from there.
I have very little experience with Amazon S3 so I don't know what could be going wrong. Should I be putting some kind of permissions on to any files I create or upload? Why would I be able to delete a file while I am logged in to CloudBerry with the same credentials provided to my program?
I'm not sure what the source of the problem is. Possibly security rules, but maybe something very simple in your bucket configuration. You can check it using the S3 Organizer Firefox plugin, the AWS management console, or any other management tool. I also recommend request-response logging - that has helped me a lot in similar investigations. The AWSSDK comes with plenty of good logging examples, so you only need to copy-paste them and everything works. Once you have the actual requests that are sent to Amazon, you can compare them with the documentation. Please also check the AccessKeyId used for your delete request.
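If it helps to pin down what is being rejected, a small sketch like this (mirroring the helper names from the question) surfaces the S3 error code from the exception:
try
{
    awsS3Helper.DeleteFileFromBucket(testFile);
}
catch (AmazonS3Exception ex)
{
    // ErrorCode (e.g. "AccessDenied") and the message usually indicate whether it is
    // a bucket policy, ACL, or credentials issue.
    Console.WriteLine("S3 error: {0} - {1}", ex.ErrorCode, ex.Message);
    throw;
}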

Using S3 Storage with .NET

I am using AWS.Net to upload user content (images) and display them on my site. This is what my code looks like currently for the upload:
using (client = Amazon.AWSClientFactory.CreateAmazonS3Client())
{
    var putObjectRequest = new PutObjectRequest
    {
        BucketName = bucketName,
        InputStream = fileStream,
        Key = fileName,
        CannedACL = S3CannedACL.PublicRead,
        //MD5Digest = md5Base64,
        //GenerateMD5Digest = true,
        Timeout = 3600000 // 1 hour
    };
    S3Response response = client.PutObject(putObjectRequest);
    response.Dispose();
}
What's the best way to store the path to these files? Is there a way to get a link to my file from the response?
Currently I just have a URL in my web.config like https://s3.amazonaws.com/<MyBucketName>/ and when I need to show an image I take that string and append the key from the object I store in the database that represents the uploaded file.
Is there a better way to do this?
The examples that come with the SDK don't really address this kind of usage, and the documentation isn't online; I don't know how to get to it after installing the SDK, despite following the directions on Amazon.
Your approach of storing paths including your bucket names in web.config is the same thing that I do, which works great. I then just store the relative paths in various database tables.
The nice thing about this approach is that it makes it easier to migrate to different storage mechanisms or CDNs such as CloudFront. I don't think that there's a better way than this approach because S3 files reside on a different domain, or subdomain if you do CNAME mapping, and thus your .NET runtime does not run under the same domain or subdomain.
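For example (a sketch; the appSetting name is made up), serving an image then just becomes a concatenation of the configured base URL and the key stored in the database:
// web.config: <add key="S3BaseUrl" value="https://s3.amazonaws.com/<MyBucketName>/" />
var baseUrl = ConfigurationManager.AppSettings["S3BaseUrl"];
var imageUrl = baseUrl + imageKey; // imageKey is the object key stored in the database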
There is also a "Location" property in the response which points directly to the URI where the S3 object is.
