I have been using C# code to fetch blob items for the past few days. However, with no changes to the way the program gets the blob data, it stopped working, and I now run into the same error every time I run it:
"EnvironmentCredential authentication unavailable. Environment variables are not fully configured"
Here is the code I am using to connect to Azure:
// Service client for the storage account; 'true' enables the interactive browser credential in the chain
Uri accountUri = new Uri(mystorageurl);
BlobServiceClient client = new BlobServiceClient(accountUri, new DefaultAzureCredential(true));
// Container and blob clients for the item being fetched
BlobContainerClient container = client.GetBlobContainerClient(blobname);
BlobClient bundle = container.GetBlobClient(itemname);
What has me confused is that if I run this same code in a separate Visual Studio solution, I get no error fetching the files from Azure. I've also sent the failing solution to another person, and they were able to run it without issue. I know it isn't an issue with environment variables, since the code worked until now and they haven't been modified in any way.
This unresolved issue on GitHub is the most similar to what I've encountered:
https://github.com/Azure/azure-sdk-for-net/issues/16079
If it worked fine when you had never set the environment variables, that means you weren't actually using EnvironmentCredential. DefaultAzureCredential attempts to authenticate via a chain of mechanisms in order, and EnvironmentCredential is the first one tried, so a partially configured environment can break it.
If you only want to authenticate via the environment, it's better to use EnvironmentCredential directly instead of DefaultAzureCredential. In that case it's necessary to set the following variables (see the sketch after this list):
AZURE_CLIENT_ID: id of an Azure Active Directory application
AZURE_TENANT_ID: id of the application's Azure Active Directory tenant
AZURE_CLIENT_SECRET: one of the application's client secrets
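As a rough sketch (assuming the three variables above are set for a service principal), you could use EnvironmentCredential explicitly, or keep DefaultAzureCredential but exclude the environment step so a half-configured environment can't break the chain:
// using Azure.Identity; using Azure.Storage.Blobs;
// Explicit environment-based auth: reads AZURE_CLIENT_ID, AZURE_TENANT_ID,
// and AZURE_CLIENT_SECRET from the process environment.
var client = new BlobServiceClient(accountUri, new EnvironmentCredential());
// Or: keep the chain but skip the environment mechanism entirely.
var options = new DefaultAzureCredentialOptions { ExcludeEnvironmentCredential = true };
var chained = new BlobServiceClient(accountUri, new DefaultAzureCredential(options));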
I am currently writing a microservice in .NET Standard that writes a Dockerfile, builds the Docker image associated with that Dockerfile (which implicitly pulls the base image), and pushes the image to a private Docker registry. Those Docker operations are all performed using the Docker.DotNet library that Microsoft maintains. I believe this is mostly just a wrapper around calls to the Docker Remote API. The execution context of this microservice is a K8s cluster hosted on AWS, or internally on bare metal, depending on the deployment.
Previously our Docker registry was just a private registry hosted internally on Artifactory, but we are migrating to a private DockerHub registry/repository. This migration has brought some authentication problems.
We authenticate all of the pull and push operations with an AuthConfig that consists of the username and password for the account associated with the registry. The AuthConfig is either added to a Parameters object and then passed to the call:
imageBuildParameters.AuthConfigs = new Dictionary<string, AuthConfig>()
    { { DockerEnvVariables.DockerRegistry, authConfig } };
…
using (var responseStream = _dockerClient.Images.BuildImageFromDockerfileAsync(tarball, imageBuildParameters).GetAwaiter().GetResult())
Or it’s (strangely, to me) both passed in a parameter and separately to the call:
ImagePushParameters imagePushParameters = new ImagePushParameters() { ImageID = image.Id, RegistryAuth = authConfig, Tag = "latest" };
_dockerClient.Images.PushImageAsync(RepoImage(image.Id), imagePushParameters, authConfig, this).GetAwaiter().GetResult();
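For reference, the AuthConfig we build is roughly the following sketch (the credentials are placeholders; note that Docker.DotNet's AuthConfig also exposes a ServerAddress property, which is presumably where the registry URL belongs):
// Sketch of the AuthConfig described above; username/password are placeholders.
var authConfig = new AuthConfig
{
    Username = "my-dockerhub-user",          // placeholder
    Password = "my-dockerhub-password",      // placeholder
    ServerAddress = DockerEnvVariables.DockerRegistry  // one of the URLs tried below
};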
We are currently getting auth errors for any of the various Docker registry URLs I've tried for DockerHub, as such (where I've redacted the organization/namespace and image):
{"message":"pull access denied for registry-1.docker.io//, repository does not exist or may require 'docker login': denied: requested access to the resource is denied"},"error":"pull access denied for registry-1.docker.io//, repository does not exist or may require 'docker login': denied: requested access to the resource is denied"
The list of DockerHub URLs that I've tried follows; all fail with either the error above or a different "Invalid reference format" error:
hub.docker.io
hub.docker.io/v1/
docker.io
docker.io/v1/
index.docker.io
index.docker.io/v1/
registry.hub.docker.com
registry-1.docker.io
Strangely enough, if I run it locally on my Windows system, the bolded URLs actually work. However, they all fail when deployed to the cluster (a different method of interacting with the Docker socket/npipe?).
Does anyone know the correct URL I need to set to properly authenticate and interact with DockerHub? Or whether my current implementation and usage of the Docker.DotNet library is incorrect in some way? Again, it works just fine with a private Docker registry on Artifactory, and also with DockerHub when run on my local Windows system. Any help would be greatly appreciated. I can provide any further information that is necessary, be it code or configuration.
I have an Azure Function that uses the Azure context. When I execute my function from Visual Studio 2019 on my machine, it executes correctly. However, when I publish it to my Azure account, I get an error that the my.azureauth file cannot be found.
Could not find file 'D:\Program Files (x86)\SiteExtensions\Functions\2.0.12950\32bit\my.azureauth'
The code that is used:
var authFilePath = "my.azureauth";
Console.WriteLine($"Authenticating with Azure using credentials in file at {authFilePath}");
azure = Azure.Authenticate(authFilePath).WithDefaultSubscription();
sub = azure.GetCurrentSubscription();
Console.WriteLine($"Authenticated with subscription '{sub.DisplayName}' (ID: {sub.SubscriptionId})");
This is code that I found on one of the Microsoft tutorials. I have set my my.azureauth file to "Copy Always".
Could anyone point me in the right direction?
You get this file path because Directory.GetCurrentDirectory() returns D:\Program Files (x86)\SiteExtensions\Functions\2.0.12950\32bit instead of D:\home\site\wwwroot or D:\home\site\wwwroot\FunctionName.
If you want to get the wwwroot folder or the function app directory, you should use ExecutionContext. For more information you can refer to this wiki doc.
So the right file path should be context.FunctionDirectory + "\my.azureauth" or context.FunctionAppDirectory + "\my.azureauth"; which one to use depends on where your file is stored.
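As a minimal sketch (assuming a timer-triggered function, with my.azureauth deployed to the function app root):
// using Microsoft.Azure.WebJobs; using Microsoft.Extensions.Logging;
// ExecutionContext is bound by the Functions runtime when added to the signature.
[FunctionName("MyTimerFunction")]  // hypothetical function name
public static void Run([TimerTrigger("0 */5 * * * *")] TimerInfo myTimer, ILogger log, ExecutionContext context)
{
    // Resolve the file relative to the function app root rather than the current directory.
    var authFilePath = System.IO.Path.Combine(context.FunctionAppDirectory, "my.azureauth");
    var azure = Azure.Authenticate(authFilePath).WithDefaultSubscription();
}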
I have found that Kudu is extremely useful in seeing what has been deployed to Azure.
Navigate to your function in the Azure portal.
The instructions here will help you get to the Kudu console:
https://www.gslab.com/blogs/kudu-azure-web-app
From there you can browse the files which have been deployed into your function's file system.
If you add ", ExecutionContext context" at the end of the function's Run entry point parameters, you can then get the folder your function is running from with var path = context.FunctionAppDirectory;
PS: apologies for any formatting; I am editing this on my phone.
Welcome to Stack Overflow.
Firstly, I'd strongly recommend against using file-based authentication as shown in your question.
From the notes:
Note, file-based authentication is an experimental feature that may or may not be available in later releases. The file format it relies on is subject to change as well.
Instead, I would personally store the connection string details (AzureCredentials) in the config file (Web/SiteSettings) and use the provided constructor...
Again, the below are taken from the documentation notes:
Similarly to the file-based approach, this method requires a service principal registration, but instead of storing the credentials in a local file, the required inputs can be supplied directly via an instance of the AzureCredentials class:
var creds = new AzureCredentialsFactory().FromServicePrincipal(client, key, tenant, AzureEnvironment.AzureGlobalCloud);
var azure = Azure.Authenticate(creds).WithSubscription(subscriptionId);
or
var creds = new AzureCredentialsFactory().FromServicePrincipal(client, pfxCertificatePath, password, tenant, AzureEnvironment.AzureGlobalCloud);
var azure = Azure.Authenticate(creds).WithSubscription(subscriptionId);
where client, tenant, subscriptionId, and key or pfxCertificatePath and password are strings with the required pieces of information about your service principal and subscription. The last parameter, AzureEnvironment.AzureGlobalCloud, represents the Azure worldwide public cloud. You can use a different value out of the currently supported alternatives in the AzureEnvironment enum.
The first example is most likely the one you should be looking at.
The notes I got this information from can be accessed here.
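Putting that together with app settings, a minimal sketch might look like the following (the setting names ClientId, ClientKey, TenantId, and SubscriptionId are hypothetical; use whatever keys you store the values under):
// using Microsoft.Azure.Management.Fluent;
// using Microsoft.Azure.Management.ResourceManager.Fluent;
// using Microsoft.Azure.Management.ResourceManager.Fluent.Authentication;
// Read the service principal details from app settings rather than a local file.
var creds = new AzureCredentialsFactory().FromServicePrincipal(
    Environment.GetEnvironmentVariable("ClientId"),     // hypothetical setting name
    Environment.GetEnvironmentVariable("ClientKey"),    // hypothetical setting name
    Environment.GetEnvironmentVariable("TenantId"),     // hypothetical setting name
    AzureEnvironment.AzureGlobalCloud);
var azure = Azure.Authenticate(creds)
    .WithSubscription(Environment.GetEnvironmentVariable("SubscriptionId"));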
If you have some problems with AAD, these screenshots may help you.
[Screenshot: locating the application's Client ID in the Azure AD portal]
[Screenshot: creating a Key for the application in the Azure AD portal]
Please note that the Key value can only be copied when it is created; after that it is hidden.
Hope this helps you get started with AAD quickly.
This is my first crack at creating an Azure Function. I've got it working when running on my local dev machine. Now I've deployed it into Azure, and I'm attaching the debugger to it (very cool, btw!).
When running on my localhost, I can use the local.settings.json file for all of the app settings, and that works fine. But I seem to be hitting a roadblock on the simple process of accessing application settings when running the function remotely in Azure.
First (and this is a battle I will fight later), I'm sure there is a way to auto-populate the settings in the Azure Function based on what's in local.settings.json, but for now I added them all manually.
For now, let's just take a look at the setting StorageConnectionString, whose value looks like this:
DefaultEndpointsProtocol=https; AccountName=[redacted]; AccountKey=[redacted]; EndpointSuffix=core.windows.net
All three of these attempts to get the value work on my localhost, but all three also fail when debugging in the remote Azure function:
string storageConString = ConfigurationManager.AppSettings["StorageConnectionString"];
string storageConString = CloudConfigurationManager.GetSetting("StorageConnectionString");
string storageConString = Environment.GetEnvironmentVariable("StorageConnectionString");
In all cases, I get this error:
Error message:
Value cannot be null. Parameter name: itemName
Stack Trace:
at Microsoft.WindowsAzure.Storage.Core.Util.CommonUtility.AssertNotNullOrEmpty(String paramName, String value) in c:\Program Files (x86)\Jenkins\workspace\release_dotnet_master\Lib\Common\Core\Util\CommonUtility.cs:line 143
at Microsoft.WindowsAzure.Storage.File.CloudFileDirectory.GetDirectoryReference(String itemName) in c:\Program Files (x86)\Jenkins\workspace\release_dotnet_master\Lib\Common\File\CloudFileDirectory.Common.cs:line 224
at XXXXXXX.Common.AzureFunctions.ProcessInbound.DoBooksExist(BookSource bookSource)
at XXXXXXX.Common.AzureFunctions.ProcessInbound.Run(TimerInfo myTimer, TraceWriter log)
The error has me kind of stumped, as it doesn't even seem to apply. Why, for example, is it calling GetDirectoryReference(String itemName)?
To get an environment variable or an app setting value, use System.Environment.GetEnvironmentVariable, as shown in the following code example:
public static string GetEnvironmentVariable(string name)
{
return System.Environment.GetEnvironmentVariable(name, EnvironmentVariableTarget.Process);
}
App settings can be read from environment variables both when developing locally and when running in Azure. When developing locally, app settings come from the Values collection in the local.settings.json file. In both environments, local and Azure, GetEnvironmentVariable("<app setting name>") retrieves the value of the named app setting. For instance, when you're running locally, "My Site Name" would be returned if your local.settings.json file contains { "Values": { "WEBSITE_SITE_NAME": "My Site Name" } }.
Taken from Azure Functions C# developer reference - Environment variables
OK, finally got this figured out. The core of the issue is some strange behavior in how the debugger works when attached remotely. I finally figured out the real issue by setting the publish configuration to Debug before publishing.
The problem had been that when I had it set to "Release", the debugger would break on the first line of the method containing the error, and NOT on the actual line with the error. This led me to think the error was coming from a line that was not, in fact, throwing it.
Once I knew the true line that was throwing the error, solving it was a snap.
I'm following the Using Cloud Datastore with .NET tutorial. At some point it says that you can run the provided project locally by just pressing F5. When I do that, I get the following exception:
Grpc.Core.RpcException: 'Status(StatusCode=PermissionDenied, Detail="Missing or insufficient permissions.")'
This exception is thrown exactly at the _db.RunQuery(query) line.
var query = new Query("Book") { Limit = pageSize };
if (!string.IsNullOrWhiteSpace(nextPageToken))
query.StartCursor = ByteString.FromBase64(nextPageToken);
var results = _db.RunQuery(query);
If I deploy the application to the cloud it works as expected, with no error. I've given Datastore Owner permissions to all accounts in Cloud IAM (Identity and Access Management), but it still doesn't work. Does anybody have any ideas?
As Jon Skeet pointed out, I was using the wrong JSON key locally. Previously I had created a Compute Engine instance, which has a separate service account. After I downloaded a new JSON key from Console -> IAM & Admin -> Service Accounts, it worked locally as well.
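As a sketch of what that looks like in practice (the key path and project ID below are placeholders), the client library picks up the service-account key via the GOOGLE_APPLICATION_CREDENTIALS environment variable:
// using Google.Cloud.Datastore.V1;
// Point the default credentials at the downloaded service-account key,
// then create the Datastore client as usual.
Environment.SetEnvironmentVariable("GOOGLE_APPLICATION_CREDENTIALS",
    @"C:\keys\my-service-account.json");   // placeholder path
var db = DatastoreDb.Create("my-project-id");  // placeholder project ID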
// Decode the embedded base64 key and import it as a CNG key
var rawData = Convert.FromBase64String(_signingKey);
var cng = CngKey.Import(rawData, CngKeyBlobFormat.Pkcs8PrivateBlob);
I use this code to extract the key from an embedded base64 string.
It works fine when I test it locally, but when I publish to Azure I get the following exception:
WindowsCryptographicException: The system cannot find the file specified
(once again I'm not reading from any file)
I need this to communicate with Apple APNs for push notifications; is there any workaround?
And this happens only on the Free service plan; if I switch to the Basic plan, it works.
I ran into the same error after publishing an existing application to Azure. In my case the problem was solved after I set WEBSITE_LOAD_USER_PROFILE = 1 in App Services / App Name / Application Settings.
Setting WEBSITE_LOAD_USER_PROFILE to equal 1 in the Azure App Service configuration definitely got my remote iOS notifications working. Using dotAPNS for C# .NET I also needed to omit apns.UseSandbox().
It seems this is caused by there being no certificate attached to your Azure Mobile App. If that is the case, we need to upload the "Development" or "Distribution" SSL certificate to the Web App. For more info about how to send push notifications to an iOS app, please refer to the Azure documentation.
I've had a similar error trying to construct an X509Certificate2 from a byte array: it worked fine locally, but once I deployed to an Azure Web App, I got the same VERY misleading file-not-found exception.
The real issue turned out to be that there was no user store associated with the web service account. You can also get a similar error if there are permission-related problems accessing the certificate store on Windows.
In any case - In my scenario I fixed the problem by using MachineKeySet:
new X509Certificate2(certRawBytes, default(string), X509KeyStorageFlags.MachineKeySet);
So, in your scenario, try something like:
var keyParams = new CngKeyCreationParameters
{
    // Store the key in the machine key set instead of the user profile.
    KeyCreationOptions = CngKeyCreationOptions.MachineKey,
};
CngKey.Create(CngAlgorithm.Rsa, keyName, keyParams);
Note: you may have to set a few parameters to get the above working. The Import method doesn't seem to support MachineKey, but you should be able to achieve a similar outcome by using the Create method.
To add to @strohmsn's answer: you can also set the App Service settings with this value directly from Visual Studio on the Publish page for web apps. Right-click on the web app and select Publish, then select App Service Settings, and you can add setting properties there, WEBSITE_LOAD_USER_PROFILE = 1 in this case. [Screenshot: App Service Settings dialog on the Visual Studio Publish page]
To make it work, I needed TWO things in the Azure Web App.
So my code is :
// Load the private key bytes
byte[] ReadedByte = System.IO.File.ReadAllBytes(strPathPrivateKey);
// Create the RSA object
RSA rsa = System.Security.Cryptography.RSA.Create();
// Import the key. It crashed HERE with 'The system cannot find the file specified'
rsa.ImportPkcs8PrivateKey(source: ReadedByte, bytesRead: out int _);
It works perfectly locally. But to make it WORK on an Azure Web App, I had to meet those TWO requirements:
1 - the WEBSITE_LOAD_USER_PROFILE = 1 setting discussed in the answers above
2 - The App Service plan must include "Custom domains / SSL"!
...so no 'F1 Shared infrastructure' nor 'D1 Shared infrastructure'. The lowest service plan that worked for me was 'B1 - 100 Total ACU'.
Maybe I have something wrong somewhere else in my code, or my 'RSA' choice is bad... anyway...
It now works!