Downloading from Azure Blob redirects wrong - C#

When I press Download and call this action, I get the message
"The resource you are looking for has been removed, had its name
changed, or is temporarily unavailable."
and am directed to
http://integratedproject20170322032906.azurewebsites.net/MyDocumentUps/Download/2020Resume.pdf
Why does it link to the above and not to
https://filestorageideagen.blob.core.windows.net/documentuploader/2020Resume.pdf
as shown in the controller?
Here is my action link in the view:
@Html.ActionLink("Download", "Download", "MyDocumentUps", new { id =
    item.DocumentId.ToString() + item.RevisionId.ToString() +
    item.Attachment }, new { target = "_blank" }) |
And here is my controller action:
public ActionResult Download(string id)
{
    string path = @"https://filestorageideagen.blob.core.windows.net/documentuploader/";
    return View(path + id);
}

I can think of two ways by which you can force a file download in the client browser.
Return a FileResult or FileStreamResult from your controller. Here's an example of doing so: How can I present a file for download from an MVC controller?. Please note that this will download the file to your server first and then stream the contents to the client browser from there. For smaller files or low load this approach may work, but as your site grows or the files to be downloaded get bigger, it will put more stress on your web server.
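For illustration, here is a minimal sketch of the first approach, assuming the same legacy WindowsAzure.Storage SDK used in the SAS sample further down; GetDocumentUploaderContainer() is a hypothetical helper that returns the CloudBlobContainer holding the uploads:
// Streams the blob through the web server and returns it as a download.
// Requires Microsoft.WindowsAzure.Storage.Blob, System.IO and System.Threading.Tasks.
public async Task<ActionResult> Download(string id)
{
    CloudBlobContainer container = GetDocumentUploaderContainer(); // hypothetical helper
    CloudBlockBlob blob = container.GetBlockBlobReference(id);

    var stream = new MemoryStream();
    await blob.DownloadToStreamAsync(stream);
    stream.Position = 0;

    // FileStreamResult; the third argument becomes the downloaded file's name.
    return File(stream, "application/octet-stream", id);
}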
Use a Shared Access Signature (SAS) for the blob with the Content-Disposition response header set. In this approach you simply create a SAS token for the blob to be downloaded and use it to build a SAS URL. Your controller then returns a RedirectResult with this URL. The advantage of this approach is that all downloads happen directly from Azure Storage and not through your server.
When creating the SAS, please ensure that:
You have at least Read permission in the SAS.
The Content-Disposition header is overridden in the SAS.
The SAS expiry is long enough for the file to be downloaded.
Here's the sample code to create a Shared Access Signature.
var sasToken = blob.GetSharedAccessSignature(new SharedAccessBlobPolicy()
{
    Permissions = SharedAccessBlobPermissions.Read,
    SharedAccessExpiryTime = DateTime.UtcNow.AddHours(1),
}, new SharedAccessBlobHeaders()
{
    // Force the browser to download the file instead of rendering it inline.
    ContentDisposition = "attachment;filename=" + blob.Name
});
var blobUrl = string.Format("{0}{1}", blob.Uri.AbsoluteUri, sasToken);
return Redirect(blobUrl);
P.S. While I was answering the question, the question got edited so the answer may seem a bit out of whack :) :).

Given that you have the image URL as part of your model and the blob has no access restrictions:
<img src="@Model.YourImageUri" />
Or, if you are loading it in a WPF client:
var image = new BitmapImage(new Uri("https://your_storage_account_name.blob.core.windows.net/your_container/your_image.jpg"));

Related

Images stored in Azure Blob don't cache in Browser

I set the Cache-Control property to "max-age=3600, must-revalidate" when uploading. Looking at the network tab of the browser developer tools, I can see that the header is correct and the cache property is set.
But the photos still get fetched from Azure Blob Storage every time with a 200 OK response.
The way I upload is as follows:
-Photo uploaded by user
-GUID added to name of photo and saved to Azure SQL database with user info.
-Blob Reference created using Azure library in C#
-Cache properties set and saved after uploading
var uniqueFileName = Guid.NewGuid().ToString() + "_" + fileName;
var newBlob = container.GetBlockBlobReference(uniqueFileName);
using var fileStream = file.OpenReadStream();
newBlob.UploadFromStreamAsync(fileStream).Wait();
newBlob.Properties.CacheControl = "max-age=3600, must-revalidate";
newBlob.SetPropertiesAsync().Wait();
The blob is fetched using a URI + SAS token.
I get the name from the SQL database, look it up in Azure Blob Storage, then get the URI and add the SAS token to give the client access to the blob.
var blob = container.GetBlockBlobReference(fileName);
var sasToken = blob.GetSharedAccessSignature(new SharedAccessBlobPolicy()
{
    Permissions = SharedAccessBlobPermissions.Read,
    SharedAccessExpiryTime = DateTime.UtcNow.AddHours(1)
});
var blobUrl = blob.Uri.AbsoluteUri + sasToken;
I found that every time a new SAS is created the image is treated as a new version, and I do create a new SAS for every request to build the link the client uses to download the image. How can I avoid this behavior? Should I make the container publicly readable and use no SAS at all?
The cache control should work when using a blob with a SAS token.
If you're using a browser (like Chrome) to visit the blob with the SAS token, please make sure you DON'T select the "Disable cache" checkbox in the developer tools. With it unchecked, I can see the cache being used on my side.
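If you also want to double-check that the Cache-Control value was actually persisted to the blob service (and not just set on the local object), a quick check with the same SDK objects from the question (container, fileName) could look like this:
// Re-read the blob's properties from the service and inspect Cache-Control.
var blob = container.GetBlockBlobReference(fileName);
blob.FetchAttributesAsync().Wait();
Console.WriteLine(blob.Properties.CacheControl); // expect "max-age=3600, must-revalidate"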

Google Cloud Storage api (c#) - cache header metadata

I upload to a Google Cloud Storage bucket via the C# storage API (Google.Cloud.Storage.V1). These are public files accessed by client pages.
Problem:
The files are served with "private, max-age=0".
Question:
I would like to set custom cache headers instead, either while uploading or afterwards, via the API itself. Is it possible to set the cache header or other metadata via the C# Google Storage API call?
I am also curious: since I have not set any cache header, why does Google Storage serve these files with max-age=0, instead of not sending any cache header at all?
You can set the cache control when you call UploadObject, if you specify an Object instead of just the bucket name and object name. Here's an example:
var client = StorageClient.Create();
var obj = new Google.Apis.Storage.v1.Data.Object
{
    Bucket = bucketId,
    Name = objectName,
    CacheControl = "public,max-age=3600"
};
var stream = new MemoryStream(Encoding.UTF8.GetBytes("Hello world"));
client.UploadObject(obj, stream);
You can do it after the fact as well using PatchObject:
var patch = new Google.Apis.Storage.v1.Data.Object
{
    Bucket = bucketId,
    Name = objectName,
    CacheControl = "public,max-age=7200"
};
client.PatchObject(patch);
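To confirm the patch took effect, you could fetch the object's metadata afterwards; a small usage sketch, reusing the same bucketId and objectName:
// Retrieves the object's metadata (not its content) and prints the cache header.
var updated = client.GetObject(bucketId, objectName);
Console.WriteLine(updated.CacheControl); // "public,max-age=7200"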
I'm afraid I don't know the details of what cache control is applied when you haven't specified anything, though.

Google drive api, upload file with shared permission

I'm trying to upload a file to Google Drive, and I want to set a 'shared' permission for this file, but I don't know how to do it...
I tried to use this code, but the file is uploaded without the shared permission.
My code is:
// _drive - google drive object
Google.Apis.Drive.v2.Data.File item = new Google.Apis.Drive.v2.Data.File();
Permission permission = new Permission();
permission.Role = "reader";
permission.Type = "anyone";
permission.WithLink = true;
item.Permissions = new List<Permission>() { permission };
FilesResource.InsertMediaUpload request = _drive.Files.Insert(item, fileStream, mimeType);
request.Upload();
OK, I have spent the last hour playing around with this. If you check out the Files.insert documentation, it doesn't really state anywhere that you should be able to set the permissions at insert time.
At the bottom, if you test it out using "Try it" and set the permissions up as you have done above under Request body,
it does upload the file. But the JSON returned gives us a clue:
"shared": false,
Now if I check the file in Google Drive, I see the same thing.
This leads me to believe that this is not supported by the Google Drive API. It is not possible to set the permissions at the time of upload. You are going to have to make a separate call to set the permissions after you have uploaded the file, as in the sketch below.
While it looks like the request body does support the permissions, it doesn't appear to be working. I am not sure if this is a bug or something that is just not supported. I am going to see if I can find the Drive issue tracker and add it as an issue.
In the meantime you are going to have to make the two calls and eat a bit of your quota.
Issue 3717: Google drive api, upload file with shared permission
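A minimal sketch of that two-call pattern, reusing the question's _drive, fileStream and mimeType and granting the original "anyone with the link" read access (assuming the upload completes successfully), could look like this:
// 1) Upload the file without trying to set permissions in the insert body.
var item = new Google.Apis.Drive.v2.Data.File();
FilesResource.InsertMediaUpload request = _drive.Files.Insert(item, fileStream, mimeType);
request.Upload();
var uploadedFile = request.ResponseBody;

// 2) Set the permission in a separate call against the uploaded file's Id.
_drive.Permissions.Insert(new Permission()
{
    Role = "reader",
    Type = "anyone",
    WithLink = true
}, uploadedFile.Id).Execute();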
I also experienced this bug. Specifying the permissions in the same file/directory upload request did not work; I had to do it in a separate request, as below. The Google Drive API documentation is not clear about this (and not clear about how to handle file permissions when using a Service Account).
// Create the folder first...
var NewDirRequest = DService.Files.Insert(GoogleDir);
var NewDir = NewDirRequest.Execute();
GoogleFolderID = NewDir.Id;

// ...then grant permissions in separate requests (each one must be executed).
var NewPermissionsRequest = DService.Permissions.Insert(new Permission()
{
    Kind = "drive#permission",
    Value = emailAddress,
    Role = "writer",
    Type = "user"
}, GoogleFolderID);
NewPermissionsRequest.Execute();

var DomainPermissionsRequest = DService.Permissions.Insert(new Permission()
{
    Kind = "drive#permission",
    Value = "mydomain.com",
    Role = "reader",
    Type = "domain"
}, GoogleFolderID);
DomainPermissionsRequest.Execute();

Why does requesting a pre-signed URL in the Amazon SDK for a file that doesn't exist return a URL?

I ran into this little bit of weirdness today, and I haven't been able to find anything about it, so I was hoping someone here could help. I'm trying to get a pre-signed URL for an image in my S3 bucket using the AWS SDK for C# .NET. I make the request by doing the following:
string url = string.Empty;
using (s3Client = new AmazonS3Client("aws-access-key",
                                     "aws-secret-key",
                                     RegionEndpoint.USEast1))
{
    GetPreSignedUrlRequest request1 = new GetPreSignedUrlRequest()
    {
        BucketName = BUCKET_NAME,
        Key = "whatever.jpg",
        Expires = DateTime.Now.AddMinutes(1)
    };
    try
    {
        url = s3Client.GetPreSignedURL(request1);
    }
    catch (AmazonS3Exception amazonS3Exception)
    {
    }
}
"Whatever.jpg" doesn't exist in my bucket, but it still returns with a URL. If I try going to that URL, it just tells me that the specified key does not exist. This all seems a bit weird to me. Why does it return a URL at all instead of throwing some exception?
Would it be better to check to see if the file exists first on S3 and then create the request for the pre-signed URL? Thanks for the all help in advance!
Signing URLs is a purely client-side operation (using cryptography).
There is no reason to add a network request to that.
For one thing, this allows you to sign URLs for objects before you have even uploaded them.
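Signing never needs a round trip, but if your application does want to confirm the key exists before handing out a link, a metadata request is one way to do it; a rough sketch reusing the s3Client, bucket and key names from the question:
try
{
    // Equivalent to a HEAD request; throws an AmazonS3Exception (404) if the key is missing.
    s3Client.GetObjectMetadata(new GetObjectMetadataRequest
    {
        BucketName = BUCKET_NAME,
        Key = "whatever.jpg"
    });
    url = s3Client.GetPreSignedURL(request1);
}
catch (AmazonS3Exception ex) when (ex.StatusCode == System.Net.HttpStatusCode.NotFound)
{
    // The object does not exist; skip generating the pre-signed URL.
}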

Using S3 Storage with .NET

I am using AWS.Net to upload user content (images) and display them on my site. This is what my code looks like currently for the upload:
using (client = Amazon.AWSClientFactory.CreateAmazonS3Client())
{
    var putObjectRequest = new PutObjectRequest
    {
        BucketName = bucketName,
        InputStream = fileStream,
        Key = fileName,
        CannedACL = S3CannedACL.PublicRead,
        //MD5Digest = md5Base64,
        //GenerateMD5Digest = true,
        Timeout = 3600000 // 1 hour
    };
    S3Response response = client.PutObject(putObjectRequest);
    response.Dispose();
}
What's the best way to store the path to these files? Is there a way to get a link to my file from the response?
Currently I just have a URL in my web.config like https://s3.amazonaws.com/<MyBucketName>/, and when I need to show an image I take that string and append the key from the object I store in the db that represents the uploaded file.
Is there a better way to do this?
None of the examples that come with it really address this kind of usage, and the documentation isn't online; I don't know how to get to it after installing the SDK, despite following the directions on Amazon.
Your approach of storing paths including your bucket names in web.config is the same thing that I do, which works great. I then just store the relative paths in various database tables.
The nice thing about this approach is that it makes it easier to migrate to different storage mechanisms or CDNs such as CloudFront. I don't think that there's a better way than this approach because S3 files reside on a different domain, or subdomain if you do CNAME mapping, and thus your .NET runtime does not run under the same domain or subdomain.
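As a concrete illustration of that approach, rendering an image is then just concatenating the configured base URL with the key stored in the database (a sketch; the "S3BaseUrl" setting name and the storedKey variable are made up for the example):
// web.config: <add key="S3BaseUrl" value="https://s3.amazonaws.com/<MyBucketName>/" />
string baseUrl = System.Configuration.ConfigurationManager.AppSettings["S3BaseUrl"];
string imageUrl = baseUrl + storedKey; // storedKey = the S3 object key saved at upload time
// then e.g. <img src="@imageUrl" /> in the view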
There is also a "Location" property in the response which points directly to the uri where the S3Object is.
