I know this question can be interpreted as a duplicate, but I simply cannot get the blob service working. I have followed the standard example on MSDN, adapting it to my own code. Using the script supplied in the example, my mobile service inserts a blob record with open properties. I then use this code to upload an image to the blob storage:
BitmapImage bi = new BitmapImage();
MemoryStream stream = new MemoryStream();
if (bi != null)
{
WriteableBitmap bmp = new WriteableBitmap((BitmapSource)bi);
bmp.SaveJpeg(stream, bmp.PixelWidth, bmp.PixelHeight, 0, 100);
}
if (!string.IsNullOrEmpty(uploadImage.SasQueryString))
{
// Get the URI generated that contains the SAS
// and extract the storage credentials.
StorageCredentials cred = new StorageCredentials(uploadImage.SasQueryString);
var imageUri = new Uri(uploadImage.ImageUri);
// Instantiate a Blob store container based on the info in the returned item.
CloudBlobContainer container = new CloudBlobContainer(
new Uri(string.Format("https://{0}/{1}",
imageUri.Host, uploadImage.ContainerName)), cred);
// Upload the new image as a BLOB from the stream.
CloudBlockBlob blobFromSASCredential = container.GetBlockBlobReference(uploadImage.ResourceName);
await blobFromSASCredential.UploadFromStreamAsync(stream);//error!
// When you request an SAS at the container-level instead of the blob-level,
// you are able to upload multiple streams using the same container credentials.
stream = null;
}
I am getting an error in this code at the point marked //error!, with the following exception:
Microsoft.WindowsAzure.Storage.StorageException: The remote server returned an error: NotFound. ---> System.Net.WebException: The remote server returned an error: NotFound. ---> System.Net.WebException: The remote server returned an error: NotFound.
I do not understand this, since the code that returns the string from the script is:
// Generate the upload URL with SAS for the new image.
var sasQueryUrl = blobService.generateSharedAccessSignature(item.containerName,
item.resourceName, sharedAccessPolicy);
// Set the query string.
item.sasQueryString = qs.stringify(sasQueryUrl.queryString);
// Set the full path on the new item,
// which is used for data binding on the client.
item.imageUri = sasQueryUrl.baseUrl + sasQueryUrl.path;
Of course this also shows that I do not completely grasp the construction of the blob storage, and therefore any help would be appreciated.
Comment elaborations
From the server code, it should grant write access for at least 5 minutes, and should therefore not be the issue. My server script is the same as in the link, but replicated here:
var azure = require('azure');
var qs = require('querystring');
var appSettings = require('mobileservice-config').appSettings;
function insert(item, user, request) {
// Get storage account settings from app settings.
var accountName = appSettings.STORAGE_ACCOUNT_NAME;
var accountKey = appSettings.STORAGE_ACCOUNT_ACCESS_KEY;
var host = accountName + '.blob.core.windows.net';
if ((typeof item.containerName !== "undefined") && (
item.containerName !== null)) {
// Set the BLOB store container name on the item, which must be lowercase.
item.containerName = item.containerName.toLowerCase();
// If it does not already exist, create the container
// with public read access for blobs.
var blobService = azure.createBlobService(accountName, accountKey, host);
blobService.createContainerIfNotExists(item.containerName, {
publicAccessLevel: 'blob'
}, function(error) {
if (!error) {
// Provide write access to the container for the next 5 mins.
var sharedAccessPolicy = {
AccessPolicy: {
Permissions: azure.Constants.BlobConstants.SharedAccessPermissions.WRITE,
Expiry: new Date(new Date().getTime() + 5 * 60 * 1000)
}
};
// Generate the upload URL with SAS for the new image.
var sasQueryUrl =
blobService.generateSharedAccessSignature(item.containerName,
item.resourceName, sharedAccessPolicy);
// Set the query string.
item.sasQueryString = qs.stringify(sasQueryUrl.queryString);
// Set the full path on the new item,
// which is used for data binding on the client.
item.imageUri = sasQueryUrl.baseUrl + sasQueryUrl.path;
} else {
console.error(error);
}
request.execute();
});
} else {
request.execute();
}
}
The idea with the pictures is that other users of the app should be able to access them. As far as I understand, I have made them public, but publicly writable only for 5 minutes. The URL for the blob I save in a mobile service table, where the user needs to be authenticated; I would like the same safety on the storage, but I do not know whether this is accomplished. I am sorry for all the stupid questions, but I have not been able to solve it on my own, so I have to "seem" stupid :)
If someone ends up in here needing help: the problem for me was the URI. It should have been http and not https. Then there were no errors uploading.
But displaying the image, even in a test image control from the toolbox, did not succeed. The problem was that I had to seek the stream back to the beginning:
stream.Seek(0, SeekOrigin.Begin);
Then the upload worked and I was able to retrieve the data.
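For anyone else hitting this, a minimal sketch of the corrected upload sequence (reusing the stream and uploadImage objects from the question; the http scheme and the Seek call are the two fixes described above):
// Sketch only: combines the two fixes from this answer.
// Rewind the stream so the upload starts from the first byte.
stream.Seek(0, SeekOrigin.Begin);
// Build the container URI with http (not https).
StorageCredentials cred = new StorageCredentials(uploadImage.SasQueryString);
var imageUri = new Uri(uploadImage.ImageUri);
CloudBlobContainer container = new CloudBlobContainer(
    new Uri(string.Format("http://{0}/{1}", imageUri.Host, uploadImage.ContainerName)), cred);
CloudBlockBlob blob = container.GetBlockBlobReference(uploadImage.ResourceName);
await blob.UploadFromStreamAsync(stream);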
Related
In my application made with ASP.NET Core, I have a page that allows the user to add and remove 3 images for their product.
When the user adds an image, I upload it to Azure Blob Storage with a specific name (built from 3 variables).
But if the user deletes and re-adds one of these images, the displayed image stays the same because of caching and because I keep the same name. I don't want to change the file name, to avoid managing the deletion in Azure Blob Storage.
Here is the code of the upload method:
public async Task<string> UploadFileToCloudAsync(string containerName, string name, Stream file)
{
var cloudBlobContainerClient =
await StorageHelper.ConnectToBlobStorageAsync(_blobStorageEndPoint, containerName);
if (cloudBlobContainerClient == null)
{
throw new NullReferenceException($"Unable to connect to blob storage {name}");
}
BlobClient blobClient = cloudBlobContainerClient.GetBlobClient(name);
await blobClient.UploadAsync(file, true);
var headers = await CreateHeadersIfNeededAsync(blobClient);
if (headers != null)
{
// Set the blob's properties.
await blobClient.SetHttpHeadersAsync(headers);
}
return blobClient.Uri.AbsoluteUri;
}
and here is the method that creates the headers:
private readonly string _defaultCacheControl = "max-age=3600, must-revalidate";
private async Task<BlobHttpHeaders> CreateHeadersIfNeededAsync(BlobClient blobClient)
{
BlobProperties properties = await blobClient.GetPropertiesAsync();
BlobHttpHeaders headers = null;
if (properties.CacheControl != _defaultCacheControl)
{
headers = new BlobHttpHeaders
{
// Set the MIME ContentType every time the properties
// are updated or the field will be cleared
ContentType = properties.ContentType,
ContentLanguage = properties.ContentLanguage,
CacheControl = _defaultCacheControl,
ContentDisposition = properties.ContentDisposition,
ContentEncoding = properties.ContentEncoding,
ContentHash = properties.ContentHash
};
}
return headers;
}
I need to know whether it is possible to force the cache to be invalidated. I think that with cache invalidation, the correct image would be displayed after an add / delete / add of the same image.
Thanks in advance
If you want to reuse the same name for different content, you need to prevent caching using Cache-Control: no-cache. Specifying must-revalidate only causes revalidation after the validity period.
It's worth noting that this is bad for clients and servers, because you lose all caching. It is much better to use unique names and design for immutability of assets.
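As a sketch of the no-cache variant (assuming the same Azure.Storage.Blobs BlobClient as in the question's CreateHeadersIfNeededAsync method):
// Sketch: mark a reused blob name as non-cacheable.
BlobProperties properties = await blobClient.GetPropertiesAsync();
var headers = new BlobHttpHeaders
{
    // Preserve the existing headers; only CacheControl changes.
    ContentType = properties.ContentType,
    ContentLanguage = properties.ContentLanguage,
    ContentDisposition = properties.ContentDisposition,
    ContentEncoding = properties.ContentEncoding,
    ContentHash = properties.ContentHash,
    CacheControl = "no-cache"
};
await blobClient.SetHttpHeadersAsync(headers);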
I have the below method to copy data to a destination storage blob:
private static async Task MoveMatchingBlobsAsync(IEnumerable<ICloudBlob> sourceBlobRefs,
CloudBlobContainer sourceContainer,
CloudBlobContainer destContainer)
{
foreach (ICloudBlob sourceBlobRef in sourceBlobRefs)
{
if (sourceBlobRef.Properties.ContentType != null)
{
// Copy the source blob
CloudBlockBlob destBlob = destContainer.GetBlockBlobReference(sourceBlobRef.Name);
try
{
//exception thrown here - StartCopyAsync
await destBlob.StartCopyAsync(new Uri(GetSharedAccessUri(sourceBlobRef.Name, sourceContainer)));
ICloudBlob destBlobRef = await destContainer.GetBlobReferenceFromServerAsync(sourceBlobRef.Name);
while (destBlobRef.CopyState.Status == CopyStatus.Pending)
{
Console.WriteLine($"Blob: {destBlobRef.Name}, Copied: {destBlobRef.CopyState.BytesCopied ?? 0} of {destBlobRef.CopyState.TotalBytes ?? 0}");
await Task.Delay(500);
destBlobRef = await destContainer.GetBlobReferenceFromServerAsync(sourceBlobRef.Name);
}
Console.WriteLine($"Blob: {destBlob.Name} Complete");
}
catch (Exception e)
{
Console.WriteLine($"Blob: {destBlob.Name} Copy Failed");
}
}
}
}
I am getting the below exception; there is no more information:
The requested operation is not allowed in the current state of the entity
What may be the cause?
Here is my method to collect blobs from the source location:
private static async Task<IEnumerable<ICloudBlob>> FindMatchingBlobsAsync(CloudBlobContainer blobContainer,string prefix, int maxrecords,int total)
{
List<ICloudBlob> blobList = new List<ICloudBlob>();
BlobContinuationToken token = null;
do
{
BlobResultSegment segment = await blobContainer.ListBlobsSegmentedAsync(prefix: prefix, useFlatBlobListing: true, BlobListingDetails.None, maxrecords, token, new BlobRequestOptions(), new OperationContext());
token = segment.ContinuationToken;
foreach (var item in segment.Results)
{
blobList.Add((ICloudBlob)item);
if (blobList.Count > total) // total record count is configured
token = null;
}
} while ( token != null);
return blobList;
}
Here is my GetSharedAccessUri method, which returns the URI without any issue:
private static string GetSharedAccessUri(string blobName, CloudBlobContainer container)
{
DateTime toDateTime = DateTime.Now.AddMinutes(60);
SharedAccessBlobPolicy policy = new SharedAccessBlobPolicy
{
Permissions = SharedAccessBlobPermissions.Read,
SharedAccessStartTime = null,
SharedAccessExpiryTime = new DateTimeOffset(toDateTime)
};
CloudBlockBlob blob = container.GetBlockBlobReference(blobName);
string sas = blob.GetSharedAccessSignature(policy);
return blob.Uri.AbsoluteUri + sas;
}
This iterates only two levels, not dynamically down to the innermost levels. I have blobs in the hierarchy below:
--Container
   --FolderA
      --FolderAA
         --FolderAA1
            --File1.txt
            --File2.txt
         --FolderAA2
            --File1.txt
            --File2.txt
         --FolderAA3
      --FolderAB
         --File8.txt
      --FolderAC
         --File9.txt
This hierarchy is dynamic
Additional question: Is there any GUI tool to copy blob data to a target storage account?
UPDATE
According to your description, I modified the official sample code. It can now completely copy the data in one container to another account, and the code has been uploaded to GitHub.
To use this sample code, you need to modify the App.config file. It will need further hardening before use in a production environment.
https://github.com/Jason446620/BlobContainerCopy
PREVIOUS
You can refer to the code in this post for the copy operation. If the solution in that post does not help you, please let me know and I will continue to follow up to help you solve the problem.
You can also download Azure Storage Explorer, a GUI tool for copying data.
I am trying to invalidate CloudFront objects in C#/.NET and am getting the following exception:
Your request contains one or more invalid invalidation paths.
My Function:
public bool InvalidateFiles(string[] arrayofpaths)
{
for (int i = 0; i < arrayofpaths.Length; i++)
{
arrayofpaths[i] = Uri.EscapeUriString(arrayofpaths[i]);
}
try
{
Amazon.CloudFront.AmazonCloudFrontClient oClient = new Amazon.CloudFront.AmazonCloudFrontClient(MY_AWS_ACCESS_KEY_ID, MY_AWS_SECRET_KEY, Amazon.RegionEndpoint.USEast1);
CreateInvalidationRequest oRequest = new CreateInvalidationRequest();
oRequest.DistributionId = ConfigurationManager.AppSettings["CloudFrontDistributionId"];
oRequest.InvalidationBatch = new InvalidationBatch
{
CallerReference = DateTime.Now.Ticks.ToString(),
Paths = new Paths
{
Items = arrayofpaths.ToList<string>(),
Quantity = arrayofpaths.Length
}
};
CreateInvalidationResponse oResponse = oClient.CreateInvalidation(oRequest);
oClient.Dispose();
}
catch
{
return false;
}
return true;
}
The array passed to the function contains a single URL, like so:
images/temp_image.jpg
The image exists in the S3 bucket and loads in the browser via the CloudFront URL.
What am I doing wrong?
Your invalidation file paths need a / at the front of the string.
If you are in doubt, you can log in to the AWS Management Console, go to CloudFront, select the distribution you are trying to invalidate files from, select Distribution Settings, and go to the Invalidations tab.
You can then create invalidations manually, which lets you check that your paths are correct.
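For instance, a small tweak to the loop from the question (a sketch; prepending the slash is the only change):
for (int i = 0; i < arrayofpaths.Length; i++)
{
    // Invalidation paths must be absolute, i.e. start with "/".
    string path = arrayofpaths[i].StartsWith("/") ? arrayofpaths[i] : "/" + arrayofpaths[i];
    arrayofpaths[i] = Uri.EscapeUriString(path);
}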
When you send an invalidation request for an object in CloudFront, you may still see the old picture in the browser at the CloudFront URL even after the invalidation completes: invalidation does not delete the object from the S3 bucket, and on the next request for the image CloudFront caches the URL images/temp_image.jpg at its edge locations again.
The effect of the invalidation becomes visible when you update the image under the same name.
Your invalidation function is correct.
Have you tried adding a forward slash at the beginning of the path (/images/temp_image.jpg)?
Using: Drive v2: 1.5.0.99 Beta, .NET Framework: 4.5
Authentication takes place properly (using impersonation) via a service account (AssertionFlowClient).
The access token is obtained, and the service account has been granted domain-wide privileges.
I am able to get the parent folder ID (strRootFolder) via Service.Files.List().
byte[] byteArray = System.IO.File.ReadAllBytes(FileName);
Google.Apis.Drive.v2.Data.File flUpload = new Google.Apis.Drive.v2.Data.File();
flUpload.Title = Title;
flUpload.Description = Description;
flUpload.MimeType = MimeType;
flUpload.Parents = new List<ParentReference>() { new ParentReference() { Id = strRootFolder } };
Google.Apis.Drive.v2.FilesResource.InsertMediaUpload drvRequest = drvService.Files.Insert(flUpload, new System.IO.MemoryStream(byteArray), "text/plain");
drvRequest.Upload();
However, the Upload method does not send any request. No exception is thrown, and a Fiddler trace shows that no request has been sent, hence request.responsebody is always null.
Am I missing something?
If an exception occurs during the upload, the returned object (IUploadProgress) should contain it (take a look at the Exception property).
Please check what the exception is.
You should also consider using UploadAsync, which doesn't block your code (but first you should understand what the exception is).
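As a rough sketch (reusing drvRequest from the question; exact signatures may vary across client library versions):
// Sketch: upload without blocking, then inspect the result.
var progress = await drvRequest.UploadAsync();
if (progress.Exception != null)
{
    // The upload failed; the exception explains why.
    Console.WriteLine(progress.Exception);
}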
You should look into the Exception from your upload; that will give you a better idea of the actual problem.
Sample code:
var progress = request.Upload();
if (progress.Exception != null)
{
//Log exception, or break here to debug
YourLoggingProvider.Log(progress.Exception.Message);
}
I am trying to upload a simple text file to a specific folder in Google Documents, but with no luck.
FileStream fileStream = new FileStream(@"c:\test.txt", System.IO.FileMode.Open);
DocumentEntry lastUploadEntry =
globalData.service.UploadDocument("c:\\test.txt", null);
string feed =
"https://docs.google.com/feeds/upload/create-session/default/private/full/folder%folder:0B2dzFB6YvN-kYTRlNmNhYjEtMTVmNC00ZThkLThiMjQtMzFhZmMzOGE2ZWU1/contents/";
var result =
globalData.service.Insert(new Uri(feed), fileStream, "application/pdf", "test");
I get an error saying
"The remote server returned an error: (503) Server Unavailable."
I suspect that the destination folder's URI is wrong, but I can't figure out the correct one.
There's a complete sample at https://developers.google.com/google-apps/documents-list/#uploading_a_new_document_or_file_with_both_metadata_and_content that uses the resumable upload component:
using System;
using Google.GData.Client;
using Google.GData.Client.ResumableUpload;
using Google.GData.Documents;
namespace MyDocumentsListIntegration
{
class Program
{
static void Main(string[] args)
{
DocumentsService service = new DocumentsService("MyDocumentsListIntegration-v1");
// TODO: Instantiate an Authenticator object according to your authentication
// mechanism (e.g. OAuth2Authenticator).
// Authenticator authenticator = ...
// Instantiate a DocumentEntry object to be inserted.
DocumentEntry entry = new DocumentEntry();
// Set the document title
entry.Title.Text = "Legal Contract";
// Set the media source
entry.MediaSource = new MediaFileSource("c:\\contract.txt", "text/plain");
// Define the resumable upload link
Uri createUploadUrl = new Uri("https://docs.google.com/feeds/upload/create-session/default/private/full");
AtomLink link = new AtomLink(createUploadUrl.AbsoluteUri);
link.Rel = ResumableUploader.CreateMediaRelation;
entry.Links.Add(link);
// Set the service to be used to parse the returned entry
entry.Service = service;
// Instantiate the ResumableUploader component.
ResumableUploader uploader = new ResumableUploader();
// Set the handlers for the completion and progress events
uploader.AsyncOperationCompleted += new AsyncOperationCompletedEventHandler(OnDone);
uploader.AsyncOperationProgress += new AsyncOperationProgressEventHandler(OnProgress);
// Start the upload process
uploader.InsertAsync(authenticator, entry, new object());
}
static void OnDone(object sender, AsyncOperationCompletedEventArgs e) {
DocumentEntry entry = e.Entry as DocumentEntry;
}
static void OnProgress(object sender, AsyncOperationProgressEventArgs e) {
int percentage = e.ProgressPercentage;
}
}
}
Just follow the article Google Apps Platform Uploading documents
Also check out Google Documents List API version 3.0
The URI should be something similar to the below:
string feed = #"https://developers.google.com/google-apps/documents-list/#getting_a_resource_entry_again";
//it may not be exact, just check and read from the links
Try this uri:
"https://docs.google.com/feeds/default/private/full/folder%3A" + fRid + "/contents"
//fRid is the Resource Id of the folder.. in your case: 0B2dzFB6YvN-kYTRlNmNhYjEtMTVmNC00ZThkLThiMjQtMzFhZmMzOGE2ZWU1
Also, I guess your URI is giving this error because you are using the folder resource ID in the form folder:resourceID.
Try removing the folder: prefix and using only the RID.
Code to cut out "folder:":
int ridIndex = rid.IndexOf(":");
rid = rid.Substring(ridIndex + 1);
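Putting the pieces together, a sketch (reusing globalData.service and fileStream from the question; the RID value shown is the one from the question and is illustrative only):
// Sketch: strip the "folder:" prefix and build the folder-contents feed URI.
string rid = "folder:0B2dzFB6YvN-kYTRlNmNhYjEtMTVmNC00ZThkLThiMjQtMzFhZmMzOGE2ZWU1";
int ridIndex = rid.IndexOf(":");
rid = rid.Substring(ridIndex + 1);
string feed = "https://docs.google.com/feeds/default/private/full/folder%3A" + rid + "/contents";
var result = globalData.service.Insert(new Uri(feed), fileStream, "text/plain", "test");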