I have started implementing Azure Blob storage in my application, and when using the development connection string ("UseDevelopmentStorage=true;") it unfortunately fails when trying to create the container here:
The exception is:
Microsoft.WindowsAzure.Storage.StorageException was unhandled by user code
InnerException:
Message=An error occurred while sending the request.
Source=System.Private.CoreLib
InnerException:
HResult=-2147012867
Message=A connection with the server could not be established
Source=System.Private.CoreLib
....
My code:
public async void UploadBlob(IFormFile file, string containerReference, string blobReference)
{
    var container = _blobClient.GetContainerReference(containerReference);
    await container.CreateIfNotExistsAsync();

    var blockBlob = container.GetBlockBlobReference(blobReference);
    using (var fileStream = file.OpenReadStream())
    {
        await blockBlob.UploadFromStreamAsync(fileStream);
    }
}
Any ideas why I would not be able to connect?
Thanks, Nik
I had to start the local storage emulator, which I had totally overlooked in the tutorial that I was following. Thank you for your tip, Kenneth!
See here:
https://azure.microsoft.com/en-us/documentation/articles/storage-use-emulator/#authenticating-requests-against-the-storage-emulator
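For reference, the development connection string only resolves once the emulator process is running. A minimal sketch (assuming the same Microsoft.WindowsAzure.Storage SDK as the question):
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Blob;

// With the emulator started, both of these resolve to the local endpoints
// (e.g. http://127.0.0.1:10000/devstoreaccount1 for blobs).
CloudStorageAccount account = CloudStorageAccount.Parse("UseDevelopmentStorage=true;");

// Equivalent shortcut exposed by the SDK:
CloudStorageAccount devAccount = CloudStorageAccount.DevelopmentStorageAccount;
CloudBlobClient blobClient = devAccount.CreateCloudBlobClient();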
I have tried many approaches from several sites, but no luck. I tried to connect using the Google.Cloud.Firestore and Google.Apis.Storage.v1 NuGet packages. The code is given below.
Google.Cloud.Firestore.FirestoreDb db = Google.Cloud.Firestore.FirestoreDb.Create("test");
CollectionReference collection = db.Collection("users");
DocumentReference document = await collection.AddAsync(new { email = "xamemail@12", name = "xamemail" });
When I tried this code, an exception occurred saying the environment variable GOOGLE_APPLICATION_CREDENTIALS was not set, so I set GOOGLE_APPLICATION_CREDENTIALS in my Windows system as well as in code, as shown below.
System.Environment.SetEnvironmentVariable("GOOGLE_APPLICATION_CREDENTIALS", @"C:\path-to-json", EnvironmentVariableTarget.Machine);
It then shows another error that the file is not found at that path, but it is there and I have set the permissions on the path.
Is anyone out there able to help with this? Is anyone already using Cloud Firestore from Xamarin in their projects?
Please note it's not the Firebase Realtime Database; I am able to connect to that.
As far as I have understood, you can't use default credentials (GOOGLE_APPLICATION_CREDENTIALS) in an app on a device, as it would not be able to find the path to the file located on your PC.
I have found this code, which I think should work. (So far I have not managed to succeed with it; I get an exception. I hope my answer can inspire someone else to find a solution for this.)
using Google.Cloud.Firestore;
using Google.Cloud.Firestore.V1;
using Google.Apis.Auth.OAuth2;
using Grpc.Auth;
using Grpc.Core;
...
// First, pack your JSON credential file into a JSON string.
// For security reasons I'm not sure this is a good idea, but it is what I could think of.
string jsonCred = @"{ your JSON cred file contents (replace double quotes with single quotes) }";
// Get your credential. As far as I have understood, it must be scoped.
var credential = GoogleCredential.FromJson(jsonCred).CreateScoped(FirestoreClient.DefaultScopes);
// Create a channel and pass the channel to the Firestore client.
Channel channel = new Channel(FirestoreClient.DefaultEndpoint.Host, FirestoreClient.DefaultEndpoint.Port, credential.ToChannelCredentials());
FirestoreClient client = FirestoreClient.Create(channel);
// Then I think it should be possible to call:
FirestoreDb db = FirestoreDb.Create(projectId, client);
But so far, at this line:
FirestoreClient client = FirestoreClient.Create(channel);
I get this exception:
System.TypeLoadException: VTable setup of type Google.Cloud.Firestore.V1.FirestoreClientImpl failed at Google.Cloud.Firestore.V1.FirestoreClient.Create (Grpc.Core.Channel channel, Google.Cloud.Firestore.V1.FirestoreSettings settings) [0x0000c] in T:\src\github\google-cloud-dotnet\releasebuild\apis\Google.Cloud.Firestore.V1\Google.Cloud.Firestore.V1\FirestoreClient.cs:622 at Padlelogg.DataHandler.SaveToFireStore[T] (System.String collection, System.Collections.Generic.List`1[T] Datalist) [0x00072] in C:\Users\rad\Documents\Xamarin projects\Padlelogg 2.10\Padlelogg\Data\DataHandler.cs:360 }
I have not been able to resolve this exception so far.
I have an ASP.NET Web Forms application.
In one of my forms, I am downloading a PDF from Azure and displaying it using RasterEdge (a PDF viewer), which allows me to add and save annotations on the PDF.
The file with annotations is then saved in a folder at the root of my application (RasterEdge_Cache).
I would like to upload the PDF back to Azure using the UploadFromFile function.
This is the function that I am using:
public static void UploadFile(DTO.BlobUpload b)
{
    CloudStorageAccount storageAccount = CloudStorageAccount.Parse(ConfigurationManager.AppSettings["SN_ZEUXYS"]);
    CloudBlobClient blobClient = storageAccount.CreateCloudBlobClient();
    CloudBlobContainer container = blobClient.GetContainerReference(b.Container);
    CloudBlockBlob blockBlob = container.GetBlockBlobReference(b.FileName);

    // b.FilePath = "~/RasterEdge_Cache/193304798925/output/A-0002-00008-00205Anno.pdf"
    blockBlob.UploadFromFile(b.FilePath);
}
This is the error message that I get:
An exception of type 'System.IO.DirectoryNotFoundException' occurred in mscorlib.dll but was not handled in user code. Additional information: Could not find a part of the path 'C:\Program Files (x86)\IIS Express\~\RasterEdge_Cache\193304798925\output\A-0002-00008-00205Anno.pdf'.
I assume that the file path is not correct. What path should I use, and am I using UploadFromFile correctly?
Thank you for your help.
Your file path is a relative path, so you need to resolve it to a physical path first. Consider using the Server.MapPath method to find its physical path relative to the root of the application on your server.
HttpContext httpContext = HttpContext.Current;
HttpServerUtility server = httpContext.Server;
b.FilePath = server.MapPath("~/RasterEdge_Cache/193304798925/X.pdf");
An exception of type 'Microsoft.WindowsAzure.Storage.StorageException' occurred in Microsoft.WindowsAzure.Storage.dll but was not handled in user code. Additional information: The remote server returned an error: (404) Not Found.
So the question is, am I using the correct function, UploadFromFile?
blockBlob.UploadFromFile(b.FilePath);
According to this article, the error "The remote server returned an error: (404) Not Found." occurs when an upload operation against a container fails because the container or the blob is not found.
So I suggest you first check that the values of b.Container and b.FileName point to an existing container and blob, or use the CreateIfNotExists method.
For more details, you could refer to the code below. Hope it gives you some tips.
protected void Button5_Click(object sender, EventArgs e)
{
    CloudStorageAccount storageAccount = CloudStorageAccount.Parse("connection string");
    CloudBlobClient blobClient = storageAccount.CreateCloudBlobClient();

    // Check that the b.Container value is right and the container exists
    CloudBlobContainer container = blobClient.GetContainerReference("foobar");
    CloudBlockBlob blockBlob = container.GetBlockBlobReference("TestFile.pdf");

    HttpContext httpContext = HttpContext.Current;
    HttpServerUtility server = httpContext.Server;
    string FilePath = server.MapPath("~/test/TestFile.pdf");

    // This creates the container if it does not exist
    container.CreateIfNotExists();

    blockBlob.UploadFromFile(FilePath);
}
I'm trying to use Azure Blob storage.
I uploaded some images successfully, but all of a sudden I get this error:
An existing connection was forcibly closed by the remote host
I looked into it and the exception is thrown whenever I try to check if a blob container exists.
This is my code:
BlobClient getter property: (note, I have marked sensitive data in the connection string with **)
static string connectionString = "DefaultEndpointsProtocol=https;AccountName=**;AccountKey=**;BlobEndpoint=https://**.blob.core.windows.net/;TableEndpoint=https://**.table.core.windows.net/;QueueEndpoint=https://**.queue.core.windows.net/;FileEndpoint=https://**.file.core.windows.net/";
public static CloudBlobClient BlobClient
{
    get
    {
        // Retrieve the storage account from the connection string.
        CloudStorageAccount storageAccount = CloudStorageAccount.Parse(connectionString);

        // Create the blob client.
        CloudBlobClient blobClient = storageAccount.CreateCloudBlobClient();
        return blobClient;
    }
}
The actual code throwing the exception:
CloudBlobContainer container = BlobClient.GetContainerReference(containerName);
if (!container.Exists())
To be precise, the exception occurs at the line where I check if the container exists.
I have no idea what's wrong. I am positive that the connection string is right (I copied it in).
I would REALLY appreciate it if someone could tell me what the issue could possibly be.
A best practice for scalability is to increase the .NET default connection limit to 100; in a client environment, the default is 2. The connection limit must be set before opening any connections. For this and other scalability best practices, please see the Microsoft Azure Storage Performance and Scalability Checklist.
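A minimal sketch of that (assuming an ASP.NET app, where Application_Start is a natural place for it):
using System.Net;

// Must run before any HTTP connections are opened; it has no effect on
// service points that already exist.
ServicePointManager.DefaultConnectionLimit = 100;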
It can also happen due to a timeout.
In such a case, you can use BlobRequestOptions to set a timeout of your choice. (I found this useful with the CloudBlobContainer.ListBlobsSegmented method.)
Here is the code example for your question:
CloudBlobContainer container = blobClient.GetContainerReference(containerName);
var containerExists = container.Exists(new BlobRequestOptions
{
    ServerTimeout = TimeSpan.FromMinutes(5)
});
if (!containerExists)
{
    // ...
}
I'm trying to FTP a file from Azure up to some FTP destination.
When I do it locally, it works great. OK...
I publish this WebJob up to Azure and it works great until it tries to do the FTP bit.
The following error is shown:
[12/21/2015 03:18:03 > 944464: INFO] 2015-12-21 03:18:03.6127| ERROR| Unable to connect to the remote server. Stack Trace: at System.Net.WebClient.UploadFile(Uri address, String method, String fileName)
[12/21/2015 03:18:03 > 944464: INFO] at System.Net.WebClient.UploadFile(String address, String method, String fileName)
And this is my simple code...
public void UploadFile(FileInfo fileInfo)
{
    try
    {
        using (var client = new WebClient())
        {
            var destination = new Uri($"ftp://{_server}/{fileInfo.Name}");
            client.Credentials = new NetworkCredential(_username, _password);
            client.UploadFile(destination, "STOR", fileInfo.FullName);
        }
    }
    catch (Exception exception)
    {
        // snipped.
        throw;
    }
}
So it's pretty damn simple.
Could this be some weird PASSIVE/ACTIVE issue? Like, our office internet is simple while Azure's setup is complex, and as such it just cannot create the FTP connection?
Now, this is where things also get frustrating. Originally I was using a third-party FTP library and that DID work, but some files were only getting partially uploaded, which is why I'm trying to use WebClient instead. So I know that my Azure WebJob "website" thingy can FTP up (no blocked ports, etc.).
Any clues as to what I can do here?
Following Kristensen's simple example, I could do it. But I'm not sure about the format of your FTP URL destination.
Did you try:
var destination = new Uri(string.Format("ftp://{0}/{1}", _server,fileInfo.Name));
This assumes _server is defined somewhere (I don't see it in your code) and that your credentials have permission to write to that location.
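On the PASSIVE/ACTIVE speculation in the question: WebClient gives you no control over the FTP mode, but FtpWebRequest (which WebClient delegates to for ftp:// URIs) does. A hedged sketch, reusing the question's _server, _username, _password and fileInfo, in case toggling the mode helps from inside Azure:
using System.IO;
using System.Net;

// Same upload as the question's WebClient code, but with explicit control
// over the transfer mode.
var request = (FtpWebRequest)WebRequest.Create($"ftp://{_server}/{fileInfo.Name}");
request.Method = WebRequestMethods.Ftp.UploadFile;
request.Credentials = new NetworkCredential(_username, _password);
request.UsePassive = true; // true is the default; flip to false to test active mode

using (Stream ftpStream = request.GetRequestStream())
using (FileStream fileStream = fileInfo.OpenRead())
{
    fileStream.CopyTo(ftpStream);
}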
I keep getting the following exception when trying to upload a file to Amazon Glacier using the .NET SDK:
System.Collections.Generic.KeyNotFoundException: The given key was not present in the dictionary.
at System.Collections.Generic.Dictionary`2.get_Item(TKey key)
at Amazon.Glacier.Model.Internal.MarshallTransformations.UploadArchiveResponseUnmarshaller.UnmarshallException(JsonUnmarshallerContext context, Exception innerException, HttpStatusCode statusCode)
at Amazon.Runtime.Internal.Transform.JsonResponseUnmarshaller.UnmarshallException(UnmarshallerContext input, Exception innerException, HttpStatusCode statusCode)
at Amazon.Runtime.AmazonWebServiceClient.handleHttpWebErrorResponse(AsyncResult asyncResult, WebException we)
at Amazon.Runtime.AmazonWebServiceClient.getResponseCallback(IAsyncResult result)
at Amazon.Runtime.AmazonWebServiceClient.endOperation[T](IAsyncResult result)
at Amazon.Glacier.Transfer.Internal.SinglepartUploadCommand.Execute()
at Amazon.Glacier.Transfer.ArchiveTransferManager.Upload(String vaultName, String archiveDescription, String filepath, UploadOptions options)
at UClaim.TaskRunner.Tasks.ArchiveDocuments.Execute() in c:\Projects\uclaim\src\UClaim.TaskRunner\Tasks\ArchiveDocuments.cs:line 55
I've got no idea why it's happening or what it means, and googling is turning up nothing. The code I'm using is nothing special, but here it is for completeness.
var document = GetDocumentToArchive();
var manager = new ArchiveTransferManager(Amazon.RegionEndpoint.EUWest1);
document.ArchiveId = manager.Upload(
    "archivedDocs",
    string.Format("#{0}: {1}", document.Claim.Id, document.Description),
    document.GeneratePathOnServer()).ArchiveId;
OK, turns out this was a stupid mistake. I thought that the SDK would create the vault if it didn't exist, but I guess it was attempting to look it up and failing. I logged in to the management console, created the "archivedDocs" vault, and now it runs fine.
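For anyone who would rather create the vault from code than from the console, a sketch (assuming the same AWS SDK for .NET; per the Glacier docs, CreateVault is idempotent, so calling it for an existing vault is harmless):
using Amazon;
using Amazon.Glacier;
using Amazon.Glacier.Model;

// Ensure the vault exists before handing work to ArchiveTransferManager.
var glacierClient = new AmazonGlacierClient(RegionEndpoint.EUWest1);
glacierClient.CreateVault(new CreateVaultRequest { VaultName = "archivedDocs" });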