I have code which inserts a blob into storage, and allows the user to view a list of the blobs, and an individual blob. However, I now can't get the blob to delete, the error that appears is
"An exception of type 'System.ServiceModel.FaultException`1' occurred in System.ServiceModel.ni.dll but was not handled in user code. Additional information: The remote server returned an error: (404) Not Found."
The code in the WCF service is
public void DeleteBlob(string guid, string uri)
{
    // Create the storage account with the shared access key
    CloudStorageAccount storageAccount = CloudStorageAccount.Parse(accountDetails);
    CloudBlobClient blobClient = storageAccount.CreateCloudBlobClient();
    CloudBlobContainer container = blobClient.GetContainerReference(guid);
    CloudBlockBlob blob = container.GetBlockBlobReference(uri);
    blob.DeleteIfExists();
}
and then I access this in the mobile client application through SOAP services like:
private void mnuDelete_Click(object sender, EventArgs e)
{
    MessageBoxResult message = MessageBox.Show("Are you sure you want to delete this image?", "Delete", MessageBoxButton.OKCancel);
    if (message == MessageBoxResult.OK)
    {
        Service1Client svc = new Service1Client();
        svc.DeleteBlobCompleted += new EventHandler<AsyncCompletedEventArgs>(svc_DeleteBlobCompleted);
        svc.DeleteBlobAsync(container, uri);
    }
}

void svc_DeleteBlobCompleted(object sender, AsyncCompletedEventArgs e)
{
    if (e.Error == null)
    {
        NavigationService.Navigate(new Uri("/Pages/albums.xaml", UriKind.Relative));
    }
    else
    {
        MessageBox.Show("Unable to delete this photo at this time", "Error", MessageBoxButton.OK);
    }
}
I also use a SAS token to save the blob in the first place; I don't know whether that makes a difference?
In Azure Storage Client Library 4.0, we changed the Get*Reference methods to accept relative addresses only. So, if you are using the latest library and the parameter "uri" is an absolute address, you should either change it to the blob name or use the CloudBlockBlob constructor that takes a Uri and a StorageCredentials object.
Please see all such breaking changes in our GitHub repository.
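For example, if the client keeps passing the blob's absolute URL, the blob name expected by GetBlockBlobReference can be recovered locally before the call. This is a minimal sketch using only System.Uri; the account and container names in the comment are placeholders, not values from the question:

```csharp
using System;

// A minimal sketch, assuming the "uri" parameter arrives as an absolute blob
// URL such as https://<account>.blob.core.windows.net/<container>/<name>.
// Get*Reference in client library 4.0+ expects just <name>, so we strip the
// leading slash and the container segment off the URI path.
static class BlobAddress
{
    public static string ToBlobName(string absoluteUrl)
    {
        var uri = new Uri(absoluteUrl);
        // AbsolutePath is "/<container>/<name>"; drop "/" and "<container>/".
        var path = uri.AbsolutePath.TrimStart('/');
        int slash = path.IndexOf('/');
        return slash >= 0 ? path.Substring(slash + 1) : path;
    }
}
```

Alternatively, new CloudBlockBlob(new Uri(url), storageAccount.Credentials) accepts the absolute address directly, as described above.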
I am using WindowsAzure.Storage (v8.1.4) in my ASP.NET Core MVC web app (v1.1.3).
I have an image crop-and-resize feature on my web app, so I decided to use Azure Blob Storage to store the raw (user-uploaded) pictures and the cropped (resized) pictures.
One important thing to keep in mind, even when you're using the CloudBlockBlob constructor with the absolute URI, is that you still need to pass your storage account credentials into the CloudBlockBlob constructor.
public class AzureBlobStorageService : IBlobStorageService
{
    private readonly AzureBlobConnectionConfigurations _azureBlobConnectionOptions;
    private readonly CloudStorageAccount _storageAccount;
    private readonly CloudBlobClient _blobClient;

    public AzureBlobStorageService(IOptions<AzureBlobConnectionConfigurations> azureBlobConnectionAccessor)
    {
        _azureBlobConnectionOptions = azureBlobConnectionAccessor.Value;
        _storageAccount = CloudStorageAccount.Parse(_azureBlobConnectionOptions.StorageConnectionString);
        _blobClient = _storageAccount.CreateCloudBlobClient();
    }

    public async Task<Uri> UploadAsync(string containerName, string blobName, IFormFile image)
    {
        ...
    }

    public async Task<Uri> UploadAsync(string containerName, string blobName, byte[] imageBytes)
    {
        ...
    }

    public async Task<byte[]> GetBlobByUrlAsync(string url, bool deleteAfterFetch = false)
    {
        // This works: absolute URI plus the account credentials
        var blockBlob = new CloudBlockBlob(new Uri(url), _storageAccount.Credentials);

        // This will fail, because no credentials are supplied:
        //var blockBlob = new CloudBlockBlob(new Uri(url));

        await blockBlob.FetchAttributesAsync();
        byte[] arr = new byte[blockBlob.Properties.Length];
        await blockBlob.DownloadToByteArrayAsync(arr, 0);
        if (deleteAfterFetch)
        {
            await blockBlob.DeleteIfExistsAsync();
        }
        return arr;
    }

    private async Task<CloudBlobContainer> CreateContainerIfNotExistAsync(string containerName)
    {
        var container = _blobClient.GetContainerReference(containerName);
        if (!await container.ExistsAsync())
        {
            await container.CreateAsync();
            await container.SetPermissionsAsync(new BlobContainerPermissions
            {
                PublicAccess = BlobContainerPublicAccessType.Blob
            });
        }
        return container;
    }
}
Hope this helps.
In Azure Storage Client Library 4.0, the Get*Reference methods accept relative addresses and nothing else; earlier behavior is not supported.
You should either change the parameter to the blob name or use the CloudBlockBlob constructor that takes a Uri and a StorageCredentials object.
I am getting an exception while setting metadata on an Azure Data Lake file using the file client; it throws "Specified value has invalid HTTP Header characters. Parameter name: name".
I am using a Dictionary to set the metadata.
PathHttpHeaders path = new PathHttpHeaders();
path.ContentType = "application/octet-stream";
fileClient.SetHttpHeaders(path);
var metaDataProperties = await GetMetaDataProperties(entityData);
await fileClient.SetMetadataAsync(metaDataProperties);
The above issue has been resolved. It was caused by the FileClient instance: I was reusing the same FileClient instance that I had used to store the file in Azure Data Lake. To resolve it, I created a new FileClient instance, and that worked for me. Below is my code.
private async Task SetMetaDataProps(MessageWrapper wrapper, Uri uri, string filename)
{
    try
    {
        var entityData = JObject.Parse(wrapper.Payload);
        entityData["FileName"] = filename;
        var storageCredentials = new StorageSharedKeyCredential("accountName", "accountKey");
        var fileclient = new DataLakeFileClient(uri, storageCredentials);
        var metaDataProps = await GetMetaDataProperties(entityData);
        await fileclient.SetMetadataAsync(metaDataProps);
    }
    catch (Exception)
    {
        // Rethrow without resetting the stack trace ("throw ex;" would reset it).
        throw;
    }
}
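As background on the error message itself: metadata travels to the service as HTTP headers (x-ms-meta-<name>), so names and values must be header-safe. A defensive sanitizer like the hypothetical helper below (the class and method names are mine, not from the question) can prevent the "invalid HTTP Header characters" exception regardless of which client instance is used:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Hypothetical helper: metadata is sent as x-ms-meta-<name> HTTP headers, so
// names should be restricted to letters, digits, and underscores, and values
// to printable ASCII. Sanitizing before SetMetadataAsync avoids the
// "Specified value has invalid HTTP Header characters" exception.
static class MetadataSanitizer
{
    public static Dictionary<string, string> Sanitize(IDictionary<string, string> metadata)
    {
        var clean = new Dictionary<string, string>();
        foreach (var pair in metadata)
        {
            // Keep only letters, digits, and underscores in the name.
            var name = new string(pair.Key.Where(c => char.IsLetterOrDigit(c) || c == '_').ToArray());
            // Keep only printable ASCII in the value.
            var value = new string(pair.Value.Where(c => c >= 0x20 && c < 0x7F).ToArray());
            if (name.Length > 0) clean[name] = value;
        }
        return clean;
    }
}
```

Usage would be `await fileclient.SetMetadataAsync(MetadataSanitizer.Sanitize(metaDataProps));`.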
I'm developing a REST Web API that uploads images to my Azure blob storage. I used one of the online tutorials and got this code:
public class DocumentsController : ApiController
{
    private const string CONTAINER = "documents";

    // POST api/<controller>
    public async Task<HttpResponseMessage> Post()
    {
        var context = new StorageContext();

        // Check if the request contains multipart/form-data.
        if (!Request.Content.IsMimeMultipartContent())
        {
            throw new HttpResponseException(HttpStatusCode.UnsupportedMediaType);
        }

        // Get and create the container
        var blobContainer = context.BlobClient.GetContainerReference(CONTAINER);
        blobContainer.CreateIfNotExists();

        string root = HttpContext.Current.Server.MapPath("~/App_Data");
        var provider = new MultipartFormDataStreamProvider(root);
        try
        {
            // Read the form data and return an async task.
            await Request.Content.ReadAsMultipartAsync(provider);

            // This illustrates how to get the file names for uploaded files.
            foreach (var fileData in provider.FileData)
            {
                var filename = fileData.LocalFileName;
                var blob = blobContainer.GetBlockBlobReference(filename);
                using (var filestream = File.OpenRead(fileData.LocalFileName))
                {
                    blob.UploadFromStream(filestream);
                }
                File.Delete(fileData.LocalFileName);
            }
            return Request.CreateResponse(HttpStatusCode.OK);
        }
        catch (System.Exception e)
        {
            return Request.CreateErrorResponse(HttpStatusCode.InternalServerError, e);
        }
    }
}
It is uploading the image to my blob container, but when I open Azure Storage Explorer and access the container, I get the wrong format, as the picture below shows.
(screenshot of the container showing the uploaded blobs and their content type)
Can you see the content type? I'm not able to open this path in Explorer.
Any help would be appreciated.
Does your "filename" from the code below include the extension when you are debugging (like .jpg or .png)? For example, "image.jpg":
var blob = blobContainer.GetBlockBlobReference(filename);
Also, "fileData.LocalFileName" from the code below needs to include the file extension:
using (var filestream = File.OpenRead(fileData.LocalFileName))
It doesn't have an extension, and that is the reason you have this issue.
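MultipartFormDataStreamProvider saves each upload to disk as "BodyPart_<guid>" with no extension, which is why LocalFileName has none. The original client file name (with its extension) is available in fileData.Headers.ContentDisposition.FileName, usually wrapped in quotes. A small helper like this hypothetical sketch (the class and method names are mine) can turn that header value into a usable blob name:

```csharp
using System;

// Hypothetical helper: the ContentDisposition.FileName header value is often
// quoted and may contain a client-side path, e.g. "\"C:\\photos\\image.jpg\"".
// We strip the quotes and any path, keeping just "name.ext" for the blob name.
static class BlobNames
{
    public static string FromContentDisposition(string contentDispositionFileName)
    {
        var trimmed = contentDispositionFileName.Trim('"');
        // Handle both Windows and Unix separators explicitly, since the path
        // comes from the client, not from this server's file system.
        int idx = trimmed.LastIndexOfAny(new[] { '\\', '/' });
        return idx >= 0 ? trimmed.Substring(idx + 1) : trimmed;
    }
}
```

Then `blobContainer.GetBlockBlobReference(BlobNames.FromContentDisposition(fileData.Headers.ContentDisposition.FileName))` stores the blob with its proper extension, so Storage Explorer infers the right content type.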
I'm using the Azure storage client to upload some files to Azure Blob storage. The upload happens from a DLL file stored on the local machine. The following is the code I'm using:
public bool UploadBlob(byte[] fileContent, CloudStorageAccount account, string containerName, string blobName)
{
    try
    {
        CloudBlobClient blobclient = account.CreateCloudBlobClient();
        CloudBlobContainer container = blobclient.GetContainerReference(containerName);
        container.CreateIfNotExist();
        CloudBlockBlob blob = container.GetBlockBlobReference(blobName);
        HashSet<string> blocklist = new HashSet<string>();
        foreach (FileBlock block in GetFileBlocks(fileContent))
        {
            if (ScanTool.mIsThreadStop)
                return false;
            ScanTool.mDocumentUploadedSize += block.Content.Length;
            blob.PutBlock(block.Id, new MemoryStream(block.Content, true), null);
            blocklist.Add(block.Id);
        }
        blob.PutBlockList(blocklist);
        blob.FetchAttributes();
        return blob.Properties.Length == fileContent.Length;
    }
    catch (Exception e)
    {
        Log.WriteErrorLog(e, "UploadBlob at AzureBlobUtilCS");
        throw new System.Net.WebException();
    }
}
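GetFileBlocks is not shown in the question. For reference, PutBlock requires each block ID to be a Base64-encoded string, and all IDs within one blob must have the same encoded length, so a zero-padded counter is a common choice. This is a hypothetical sketch of how such IDs could be produced, not the asker's actual implementation:

```csharp
using System;
using System.Text;

// Hypothetical sketch: block IDs for PutBlock must be Base64 strings of equal
// length within a blob. Encoding a fixed-width, zero-padded counter ("000001",
// "000002", ...) satisfies both constraints.
static class BlockIds
{
    public static string ForIndex(int index)
    {
        return Convert.ToBase64String(Encoding.UTF8.GetBytes(index.ToString("d6")));
    }
}
```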
I'm calling the above upload method as follows, and it throws a "Proxy Authentication failed" exception in the following code:
try
{
    CloudBlobContainer container = AzureHelper.GetContainer(containerName, accountName, accountKey);
    CloudStorageAccount storageAccount = CloudStorageAccount.Parse(AzureHelper.GetConnectionString(accountName, accountKey));
    return UploadBlob(fileContent, storageAccount, containerName, blobName);
}
catch (Exception e)
{
    WriteInformationMessage("exception at UploadBlob11 =" + e.Message);
    return false;
}
This issue occurs at one of my client sites, and he says they have a proxy in their local network. The proxy is a Blue Coat ProxySG 900.
How can I get rid of this?
I was experiencing similar issues, not only in Azure Storage, but also on the entire website.
In order to disable the default proxy used by Azure Storage, which happens to be the default proxy used by the HttpClient class, I changed Web.config by adding the defaultProxy element:
<configuration>
  <system.net>
    <defaultProxy enabled="false"></defaultProxy>
  </system.net>
</configuration>
If you must configure the default proxy, instead of disabling it, you can also do so within that same element, according to the docs: https://msdn.microsoft.com/en-us/library/kd3cf2ex(v=vs.110).aspx
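The same thing can also be done in code rather than configuration. This is a minimal sketch, assuming the client machine must authenticate through its proxy; the proxy address below is a placeholder, not the asker's actual Blue Coat endpoint:

```csharp
using System;
using System.Net;

// A minimal sketch: assigning WebRequest.DefaultWebProxy in code is the
// programmatic equivalent of the <defaultProxy> config element. Call this
// once at startup, before any storage requests are made.
static class ProxySetup
{
    public static WebProxy Configure(string proxyAddress)
    {
        var proxy = new WebProxy(proxyAddress)
        {
            // Authenticate against the proxy with the logged-on user's credentials.
            Credentials = CredentialCache.DefaultNetworkCredentials
        };
        WebRequest.DefaultWebProxy = proxy;
        return proxy;
    }
}
```

Usage: `ProxySetup.Configure("http://proxy.example.local:8080");` before the first UploadBlob call.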
I created a web app in Azure. It was a simple web app created with Microsoft ASP.NET. I downloaded my Azure publish profile and published to the Azure web app using Visual Studio. There is an image folder in the web app, and when I published, all the images uploaded like a charm. Then I built a WPF smart-client app using a WebClient object and set its Credentials to network credentials with the user ID and password of my Azure account. But when the line reaches the WebClient upload method, I get a 401 Unauthorized exception. It looks to me that my credentials are not accepted when I try to upload. If it were IIS, I would know what to do, but in Azure I am not sure how to give an anonymous user access to upload the image. Any comments or points to consider here?
If your images are not a static part of your application but can instead be created from your application (for example, a user uploading his picture), I would recommend using Azure Storage instead of the file system (you won't lose images uploaded by your users after your next deploy).
Azure Storage can be easily managed both from code as well as using GUI management tools like CloudBerry Explorer.
1. Adding namespaces
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Auth;
using Microsoft.WindowsAzure.Storage.Blob;
2. Adding a class
public class blobservice
{
    public CloudBlobContainer GetCloudBlobContainer()
    {
        string connString = "DefaultEndpointsProtocol=https;AccountName=<account-name>;AccountKey=<account-key>;";
        string destContainer = "mysample";
        CloudStorageAccount storageAccount = CloudStorageAccount.Parse(connString);
        CloudBlobClient blobClient = storageAccount.CreateCloudBlobClient();
        CloudBlobContainer blobContainer = blobClient.GetContainerReference(destContainer);
        if (blobContainer.CreateIfNotExists())
        {
            blobContainer.SetPermissions(new BlobContainerPermissions
            {
                PublicAccess = BlobContainerPublicAccessType.Blob
            });
        }
        return blobContainer;
    }
}
3. The .aspx.cs code-behind
blobservice _blobServices = new blobservice();

protected void Page_Load(object sender, EventArgs e)
{
    Upload();
}

public void Upload()
{
    CloudBlobContainer blobContainer = _blobServices.GetCloudBlobContainer();
    CloudBlockBlob blob = blobContainer.GetBlockBlobReference("Sampleblob.jpg");
    WebClient wc = new WebClient();
    byte[] bytes = wc.DownloadData(Server.MapPath("~/Images/active.png"));
    using (Stream ms = new MemoryStream(bytes))
    {
        blob.UploadFromStream(ms);
    }
}

protected void btnDelete_Click(object sender, EventArgs e)
{
    string Name = "https://bikeimages.blob.core.windows.net/mysample/Sampleblob.jpg";
    Uri uri = new Uri(Name);
    string filename = System.IO.Path.GetFileName(uri.LocalPath);
    CloudBlobContainer blobContainer = _blobServices.GetCloudBlobContainer();
    CloudBlockBlob blob = blobContainer.GetBlockBlobReference(filename);
    blob.Delete();
}
I am using ASP.NET Web API.
I occasionally get this error when I make an HTTP request to a Web API endpoint that returns a Task.
Within this response, I use await to make sure the method doesn't complete until all of its tasks have finished.
Essentially I have a Post method:
public async Task<HttpResponseMessage> PostHouse(SessionModal sessionModal)
{
    // create database context
    // Create a new house object
    House h = new House();
    h.address = await GetFormattedAddress(sessionModal.location);
    // save to database
}

public async Task<FormattedAdress> GetFormattedAddress(DBGeography Location)
{
    var client = new HttpClient();
    // create URI
    var URI = new URi......
    Stream respSream = await client.GetStreamAsync(URI);
    // Data Contract Serializer
    // Create Formatted Address
    FormatedAddress address = ....
    return address;
}
The code works for a while, but over time I begin to get the error response:
"An operation on a socket could not be performed because the system lacked sufficient buffer space or because a queue was full"
If I restart the server, the issue is temporarily relieved. While I am consistently getting this error, a POST, GET, DELETE, or PUT to a different controller endpoint will still work, but if I go back and attempt to post to PostHouse, I will still get the error.
Is it possible that the thread isn't being disposed and thus the port it uses never gets freed?
EDIT
Here is the exact code I am trying to fix. I attempted to use using(){}, but I still get the error; I think there is something I am forgetting to dispose of. Essentially I get an image from a POST, then send it to blob storage. If there is a better way of doing this, I wouldn't mind those suggestions either.
public async Task<HttpResponseMessage> Post()
{
    var result = new HttpResponseMessage(HttpStatusCode.OK);
    if (Request.Content.IsMimeMultipartContent())
    {
        try
        {
            await Request.Content.ReadAsMultipartAsync<MultipartMemoryStreamProvider>(new MultipartMemoryStreamProvider()).ContinueWith((task) =>
            {
                MultipartMemoryStreamProvider provider = task.Result;
                foreach (HttpContent content in provider.Contents)
                {
                    using (Stream stream = content.ReadAsStreamAsync().Result)
                    {
                        Image image = Image.FromStream(stream);
                        var testName = content.Headers.ContentDisposition.Name;
                        String[] header = (String[])Request.Headers.GetValues("userId");
                        int userId = Int32.Parse(header[0]);
                        using (var db = new studytree_dbEntities())
                        {
                            Person user = db.People.Find(userId);
                            CloudStorageAccount storageAccount = CloudStorageAccount.Parse(
                                ConfigurationManager.ConnectionStrings["StorageConnectionString"].ConnectionString);
                            //CloudStorageAccount storageAccount = CloudStorageAccount.Parse(
                            //    CloudConfigurationManager.GetSetting("StorageConnectionString"));
                            CloudBlobClient blobClient = storageAccount.CreateCloudBlobClient();
                            // Retrieve a reference to a container.
                            CloudBlobContainer container = blobClient.GetContainerReference("profilepic");
                            // Create the container if it doesn't already exist.
                            container.CreateIfNotExists();
                            container.SetPermissions(new BlobContainerPermissions
                            {
                                PublicAccess = BlobContainerPublicAccessType.Blob
                            });
                            string uniqueBlobName = string.Format("image_{0}.png", header[0]);
                            CloudBlockBlob blob = container.GetBlockBlobReference(uniqueBlobName);
                            user.ProfilePhotoUri = blob.Uri.ToString();
                            user.Student.ProfilePhotoUri = blob.Uri.ToString();
                            user.Tutor.ProfilePhotoUri = blob.Uri.ToString();
                            using (var streamOut = new System.IO.MemoryStream())
                            {
                                image.Save(streamOut, ImageFormat.Png);
                                streamOut.Position = 0;
                                blob.UploadFromStream(streamOut);
                                db.SaveChanges();
                            }
                        }
                    }
                }
            });
        }
        catch (Exception e)
        {
            JObject m = JObject.Parse(JsonConvert.SerializeObject(new { e.Message, e.InnerException }));
            return Request.CreateResponse<JObject>(HttpStatusCode.InternalServerError, m);
        }
        return Request.CreateResponse(HttpStatusCode.OK);
    }
    else
    {
        throw new HttpResponseException(Request.CreateResponse(HttpStatusCode.NotAcceptable, "This request is not properly formatted"));
    }
}
You aren't disposing your Stream.
Try this
public async Task<FormattedAdress> GetFormattedAddress(DBGeography Location)
{
    using (var client = new HttpClient())
    {
        var URI = new URi......
        using (Stream respSream = await client.GetStreamAsync(URI))
        {
            FormatedAddress address = ....
            return address;
        }
    }
}
What is going on is that you are leaving your network streams open, and your computer can only make a limited number of connections. By not disposing of the streams, you hold on to those connections, so future requests can't use them.
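Beyond disposing the stream, a further option worth considering (this is an additional suggestion, not part of the fix above) is to share a single HttpClient for the lifetime of the application instead of constructing one per call, since per-request clients leave disposed connections lingering in TIME_WAIT and are a common cause of exactly this socket-exhaustion error:

```csharp
using System;
using System.Net.Http;

// A minimal sketch: HttpClient is designed to be shared and thread-safe for
// request methods, so one static instance lets the OS reuse connections
// instead of opening a fresh socket per request.
static class Http
{
    public static readonly HttpClient Client = new HttpClient();
}
```

With this in place, GetFormattedAddress would call `await Http.Client.GetStreamAsync(URI)` (still disposing the returned stream) rather than creating a new client each time.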