Does Connecting to Remote WMI from an ASP.NET Page using Constrained Delegation Require Protocol Transition? - c#

I'm working on an older web app that I did not originally build, but now maintain. It is a classic ASP app with ASPX pages mixed in.
The site is set up for Kerberos authentication and delegation, and that is working properly to other boxes (e.g. I can run a SQL query against a back-end server from an ASP page in the site and it connects using the front-end client's credentials properly). So SPNs are registered, delegation privileges are configured in AD, etc.
Now, the part I'm having trouble with is an ASPX page which invokes a remote WMI call to check on the status of an IIS website, using the \root\WebAdministration WMI namespace. The ASPX page is itself invoked by way of an XHR which resides in the client-side code of a different ASP page. The ASPX, when invoked, makes the WMI call, then Response.Write's back the necessary data, which the originating ASP page then utilizes to populate the page that the user sees. The problem is, I cannot get the IIS box to properly delegate the user's credentials to the back-end machine that it's making the WMI call against.
This all works properly (including the constrained delegation), but only if I enable protocol transition. If I set the delegation on the middle-tier (IIS) box to use only Kerberos authentication, it fails (I get an anonymous logon attempt on the back-end box).
Now, I've done numerous packet captures on both the front-end client and the IIS box to see exactly what is going on here, and I can see several things:
The front-end client is properly getting its Kerberos ticket, and presenting it to the IIS box for authentication.
The IIS box is accepting the Kerberos ticket from the client.
However, the IIS box is not using the ticket received from the client as the "evidence ticket" that it should be presenting to the KDC in order to obtain a service ticket to access the back-end service on behalf of the front-end user. Instead, the IIS box is using a S4U2Self call to the KDC to obtain a ticket on the front-end user's behalf for itself, then using that ticket in the subsequent S4U2Proxy call to try and obtain the ticket to the back-end. This is where the problem lies.
The behavior noted above is why this works when protocol transition is enabled, and does not work when it is not.
I cannot figure out, for the life of me, why the IIS box feels the need to obtain a TGS for itself to use as the "evidence ticket" to get the ticket for accessing the back-end, instead of simply using the ticket presented by the client. There is nothing invalid about the client's ticket from what I can tell, and the client is establishing a Kerberos-authenticated connection with the web server just fine, so there should be no need for protocol transition here. I could enable it if really needed, but I really just want to know why it's necessary (if there is a valid reason and this is by design, then so be it).
The IIS app pool is running as the built-in app pool identity, and the delegation settings are thus configured on the IIS machine account in AD. SPNs are registered for the site against the IIS machine account, and for the back-end services against those service and/or machine accounts, and the "allowedToDelegateTo" list is configured on the IIS machine account, allowing constrained delegation to the necessary services. The specific SPN we are trying to delegate creds to in this scenario is RPCSS/[machine] for the WMI call. I've verified via the packet capture that the SPN in the request matches the SPN in the A2D2 list exactly (of course, if it didn't, then it wouldn't be working when protocol transition was enabled anyway).
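For reference, the A2D2 list can also be checked programmatically; a minimal sketch (the LDAP path below is a hypothetical placeholder, and a reference to System.DirectoryServices is assumed):
using System;
using System.DirectoryServices;

// Sketch: dump the msDS-AllowedToDelegateTo (A2D2) list of the IIS machine
// account to confirm the target SPN (e.g. RPCSS/machine.domain.com) is present.
// The LDAP path is a hypothetical placeholder.
var entry = new DirectoryEntry("LDAP://CN=IISBOX,CN=Computers,DC=domain,DC=com");
foreach (object spn in entry.Properties["msDS-AllowedToDelegateTo"])
{
    Console.WriteLine(spn);
}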
As for the actual WMI connection code, I've tried a few ways. One was something like this:
// Requires: using System.Management;
ConnectionOptions co = new ConnectionOptions();
// I did try ImpersonationLevel set to both Impersonate and Delegate, but I don't think I need
// Delegate here because I'm not delegating from the remote WMI machine to a different box; instead,
// I'm delegating from the IIS box to the remote WMI machine.
co.Impersonation = ImpersonationLevel.Impersonate;
co.Authentication = AuthenticationLevel.PacketPrivacy;
co.EnablePrivileges = true;
// Tried this for the Authority line because I noticed in the packet captures that the principal
// specified here becomes the SPN that is used in the S4U2Proxy request.
co.Authority = "kerberos:RPCSS/machine.domain.com";
ManagementScope ms = new ManagementScope(@"\\machine.domain.com\root\WebAdministration", co);
Then I also tried this:
ConnectionOptions co = new ConnectionOptions();
co.Impersonation = ImpersonationLevel.Impersonate;
co.Authentication = AuthenticationLevel.PacketPrivacy;
co.EnablePrivileges = true;
// I also tried this for the Authority line based on various other code examples for similar
// issues, but this resulted in an incorrect SPN being used in the request.
co.Authority = @"kerberos:DOMAIN\machine";
ManagementScope ms = new ManagementScope(@"\\machine.domain.com\root\WebAdministration", co);
I also tried the same as above, but without an Authority line, and the correct SPN was used in the request but it still didn't work.
Finally, I also tried this, with no ConnectionOptions object at all, hoping it would just pass on the default creds:
ManagementScope ms = new ManagementScope(@"\\machine.domain.com\root\WebAdministration");
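For context, once connected, the status check itself looks roughly like this sketch (the site name is a placeholder, and the Site class's GetState method from the root\WebAdministration schema is assumed):
// Sketch: consume the scope built above to read the site's runtime state.
// 'Default Web Site' is a placeholder; GetState is assumed per the
// WebAdministration WMI schema (1 = Started). Requires using System.Management;
ms.Connect();
var query = new ObjectQuery("SELECT * FROM Site WHERE Name = 'Default Web Site'");
using (var searcher = new ManagementObjectSearcher(ms, query))
{
    foreach (ManagementObject site in searcher.Get())
    {
        uint state = (uint)site.InvokeMethod("GetState", null);
        Response.Write(state); // runs in the ASPX page context
    }
}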
Any help here on either how I can get this working without enabling protocol transition, or info on why this setup would require protocol transition, would be very much appreciated!

Related

How do I keep/store my Azure credentials?

I have this web app which accesses a key vault stored in the Azure cloud.
To access this key vault I use the IConfigurationBuilder extension:
configuration.AddAzureKeyVault(new Uri(KeyvaultUri), new DefaultAzureCredential(true));
I have created a managed identity for all the users who need access to this, meaning they should be able to run the application and have access to the key vault once they are logged in via SSO, which they are currently forced to do every time they start the application due to new DefaultAzureCredential(true). What I don't understand is why the login needs to be requested every time instead of storing the credentials somewhere after they have been entered once and reusing them. Can I somehow locally store the required credentials after the initial login?
It is sort of inconvenient to always log in when one starts their application, and debugging the application becomes a bit lengthy with the required login.
Is it somehow possible to let the login happen in the background, or somehow store the credentials after the first login?
I feel this is getting a bit off track: the solution I am seeking should be applicable for those running the solution via a terminal, outside of Visual Studio, such as frontend developers who just need a backend to make requests to and nothing else.
It makes no sense to cache the token since it is used only once at startup. What you are looking for is to exclude from your credential chain all the wrong ways of grabbing the token to connect to your AKV, keeping only the one you are really using.
To configure it correctly and not wait 15 seconds at startup, you should configure DefaultAzureCredential this way:
DefaultAzureCredential credentials = new DefaultAzureCredential(new DefaultAzureCredentialOptions
{
    ExcludeEnvironmentCredential = true,
    ExcludeInteractiveBrowserCredential = true,
    ExcludeAzurePowerShellCredential = true,
    ExcludeSharedTokenCacheCredential = true,
    ExcludeVisualStudioCodeCredential = true,
    ExcludeVisualStudioCredential = true,
    ExcludeAzureCliCredential = true,
    ExcludeManagedIdentityCredential = false,
});
Exclude all possibilities of grabbing the token except the one you are using, in this case managed identity, or in other cases AzureCliCredential.
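For completeness, a minimal sketch of wiring that credential into the key vault setup, reusing the KeyvaultUri constant from the question:
// Sketch: pass the restricted credential chain to the configuration builder.
configuration.AddAzureKeyVault(new Uri(KeyvaultUri), credentials);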
Regards.
Can you share a small, full code example?
What about using DefaultAzureCredentialOptions?
Like:
.ConfigureAppConfiguration((context, config) =>
{
    var appSettings = config.Build();
    var credentialOptions = new DefaultAzureCredentialOptions();
    var credential = new DefaultAzureCredential(credentialOptions);
    config.AddAzureKeyVault(new Uri(appSettings["Url:KeyVault"]), credential);
})
I assume you've got different Azure subs for non-prod and prod (that contain non-prod and prod key vault instances).
In the non-prod key vault instance, create a new role assignment with the required members i.e. the developer AAD accounts.
Given you're using DefaultAzureCredential, these members will be able to leverage Azure service authentication in Visual Studio or Visual Studio Code (or EnvironmentCredential) as DefaultAzureCredential will cycle through these various credential types including VisualStudioCredential and VisualStudioCodeCredential (and EnvironmentCredential) respectively.
Developers can authenticate in their IDE e.g. in Visual Studio by going to Tools > Options > Azure Service Authentication where they can authenticate using their Azure credentials.
Assuming their AAD accounts have been granted access to the (non-prod) key vault instance, they will be able to get access.
The deployed application - presumably running in Azure - would use a different credential type e.g. ManagedIdentityCredential or EnvironmentCredential. Given these are also handled by the DefaultAzureCredential, no code changes would be required for this to work for the deployed instance of your app.
The only difference with the prod key vault instance is that you probably wouldn't create role assignments for the developer accounts.
The solution I am seeking should be applicable for those running the solution via a terminal, outside of Visual Studio, such as frontend developers who just need a backend to make requests to and nothing else.
For this type of user who just wants to run the app from the terminal, they could set some environment variables that will get picked up by the EnvironmentCredential (which is another one of the credential types included in the DefaultAzureCredential). For example, if they're running the app in Docker they could specify the AZURE_USERNAME and AZURE_PASSWORD environment variables (or alternatively AZURE_CLIENT_ID and AZURE_CLIENT_SECRET for more of a 'machine' context), e.g. docker run -e AZURE_USERNAME=username -e AZURE_PASSWORD=password ...
Based upon your question, I am assuming the following:
The web application has a managed identity and has permissions to your key vault.
The users are logging in as themselves in the front end of the web application.
The new DefaultAzureCredential(true) just grabs the user's current credentials. This will be the front-end user. These are cached automatically based upon your organization's security policy. I assume this is working correctly.
The login frequency is out of your control as a developer. The issue you are having is in the organizational settings in Azure Active Directory. Your sign-in frequency may be set to one of these:
Require reauthentication every time
Sign-in frequency control every time risky user
To fix your issue, set the sign-in frequency to something less strict than that and you should be good. (I don't have access to this or I would post screenshots.)
Here is the link to the full article on how to do this:
Configure authentication session management with Conditional Access
This was what I was looking for
https://github.com/Azure/azure-sdk-for-net/issues/23896
A silent authentication step that stores the credentials in a cache file the first time they are entered, then reuses them when rerunning the application without prompting the user to enter credentials again.
This ensures credentials aren't stored in git, and the cache file is stored locally on each developer's machine.
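A rough sketch of that approach with Azure.Identity (the cache name and record path are hypothetical; this assumes the InteractiveBrowserCredential persistent-cache API discussed in the linked issue):
using System.IO;
using System.Threading.Tasks;
using Azure.Identity;

// Sketch: persist the token cache plus an AuthenticationRecord on first login,
// then reuse both on later runs so no prompt is shown.
static async Task<InteractiveBrowserCredential> GetCredentialAsync()
{
    var options = new InteractiveBrowserCredentialOptions
    {
        TokenCachePersistenceOptions = new TokenCachePersistenceOptions { Name = "myapp-cache" }
    };

    string recordPath = "authrecord.bin"; // local file, kept out of git
    if (File.Exists(recordPath))
    {
        // Later runs: load the record so the credential can authenticate silently.
        using (var stream = File.OpenRead(recordPath))
        {
            options.AuthenticationRecord = AuthenticationRecord.Deserialize(stream);
        }
        return new InteractiveBrowserCredential(options);
    }

    // First run: prompt once, then save the record next to the persisted cache.
    var credential = new InteractiveBrowserCredential(options);
    AuthenticationRecord record = await credential.AuthenticateAsync();
    using (var stream = File.OpenWrite(recordPath))
    {
        await record.SerializeAsync(stream);
    }
    return credential;
}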

Token access blocked when posting request from published Azure function

I am struggling to get a token from "https://login.microsoftonline.com/common/oauth2/token" with an Azure function via a POST request. The token will give permissions to access SharePoint through CSOM. Here is my code snippet with the POST request:
var clientId = defaultAADAppId;
var body = $"resource={resource}&client_id={clientId}&grant_type=password&username={HttpUtility.UrlEncode(username)}&password={HttpUtility.UrlEncode(password)}";
using (var stringContent = new StringContent(body, Encoding.UTF8, "application/x-www-form-urlencoded"))
{
    var result = await httpClient.PostAsync(tokenEndpoint, stringContent);
    // Read the response body before deserializing (PostAsync returns an HttpResponseMessage)
    var json = await result.Content.ReadAsStringAsync();
    var tokenResult = JsonSerializer.Deserialize<JsonElement>(json);
    var token = tokenResult.GetProperty("access_token").GetString();
}
When testing locally, both when running the function in Visual Studio and when trying with Postman, I am able to obtain an access token. However, as soon as I publish the function to my Function App in Azure I receive the following error message:
"AADSTS53003: Access has been blocked by Conditional Access policies. The access policy does not allow token issuance"
I have enabled an app registration in the portal and as mentioned, it all works fine until I publish everything to Azure.
Any ideas on how to solve this?
I got it to work now. First of all, I reviewed the CA policies as @CaseyCrookston suggested. What I found out was that our CA policies blocked calls originating outside the country we operate from. The calls from the app registration/Azure function were registered from the Azure data centre location and thus blocked by our CA policies. When running the function locally, the calls were registered in my country and therefore no errors showed up while debugging.
My first step was trying to add my client app to the CA policy, which was not possible. The client/secret authentication that I used, based on the suggestions in this CSOM guide by Microsoft, prevented the app registration from being whitelisted in the CA policies (GitHub issue).
Based on this I had to change to certificate-based authentication as suggested here: Access token request with a certificate, and here: SO answer. With this I was able to whitelist the app registration in the CA policies and successfully authenticate to the SharePoint CSOM.
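For reference, a minimal sketch of the certificate-based token request with MSAL.NET (the tenant ID, client ID, certificate path, and SharePoint host are placeholders):
using System.Security.Cryptography.X509Certificates;
using Microsoft.Identity.Client;

// Sketch: client-credentials flow with a certificate instead of a client secret.
// All identifiers below are hypothetical placeholders.
var certificate = new X509Certificate2("app-cert.pfx", "pfx-password");
var app = ConfidentialClientApplicationBuilder
    .Create("<client-id>")
    .WithCertificate(certificate)
    .WithTenantId("<tenant-id>")
    .Build();

// Request an app-only token for the SharePoint resource (inside an async method).
var authResult = await app.AcquireTokenForClient(
        new[] { "https://contoso.sharepoint.com/.default" })
    .ExecuteAsync();
string accessToken = authResult.AccessToken;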
As the error message says, your app is blocked by CA policy. Possible causes can be unknown client app, blocking external IP addresses, etc.
You can perform one of the below workarounds:
Add your Client app to your CA policy.
I wouldn't recommend this because it affects your security. If you take the risk, you could instead exclude "Microsoft Azure Management" from the CA policy that blocks unknown clients / requires device state, and still protect the sign-in with MFA.
A better approach is to use another OAuth 2.0 / OpenID Connect flow, like a delegated flow where you sign in directly within the app, if possible.

Authentication/Cognito SDK not working once deployed to AWS Lambda

I've integrated AWS Cognito User Pools into my app, as outlined in this article: http://snevsky.com/blog/dotnet-core-authentication-aws-cognito using these packages: AWSSDK.Core and AWSSDK.CognitoIdentityProvider.
In my dev environment, it works well: I can call AdminInitiateAuthAsync to authenticate a user, and I can call SignUpAsync to create a new user. Other methods work well too--in my dev environment.
However, when I deploy my code to Lambda, it doesn't work. Specifically, it's hanging on this line:
var response = await cognito.AdminInitiateAuthAsync(request);
Eventually, I get an error in CloudWatch saying Task timed out. However, it doesn't tell me why. Based on my past experience with Lambda and AWS, I assume it's a permissions issue between Lambda and Cognito, but this is just a guess.
A couple things I've tried:
As outlined in the article, I added two dev environment settings: AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY. However, Lambda won't let you set these specific keys as Environment Variables. So, as a test, I tried setting these explicitly in my code:
var credentials = new BasicAWSCredentials("myAccessKey", "mySecretKey");
var region = RegionEndpoint.GetBySystemName("myRegionId");
var cognito = new AmazonCognitoIdentityProviderClient(credentials, region);
I added admin permissions to the Lambda Execution role.
Any help appreciated.
A couple things:
You don't need to provision your API keys in lambda code. AWS does it for you, given that your code actually runs in a secured sandbox.
Your Cognito user pool has a public DNS name, so when you hit it with any API request (even via the AWS SDK), your traffic goes over the public internet. That means your Lambda must be able to send traffic to the public internet. Given that you face the "Task timed out" error, the most common cause is that the function was put into a VPC without provisioning a NAT instance/gateway, route table rules, and the necessary security groups to let Lambda reach the public internet. Check your infrastructure setup: if your Lambda is in a VPC, decide whether it really needs to be there. If it doesn't, move it out and your Cognito requests will most likely work immediately. If it does, you have to configure NAT, security groups, and route table records.
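On the first point, a minimal sketch (the region is a placeholder): when the execution role carries the required cognito-idp permissions, the client can be constructed without any explicit keys and the SDK resolves credentials from the Lambda environment:
using Amazon;
using Amazon.CognitoIdentityProvider;

// Sketch: no BasicAWSCredentials needed inside Lambda; the SDK picks up the
// execution role's temporary credentials automatically. Region is a placeholder.
var cognito = new AmazonCognitoIdentityProviderClient(RegionEndpoint.USEast1);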

Windows Authentication Impersonation - Second request gets wrong user identity

I have the following architecture:
Client1(Browser-App) -> Server1 (WebAPI/IIS) -> Server2 (WebAPI/IIS)
I am using ASP.NET for my server-side applications/apis and the user should be authenticated via "windows integrated authentication".
As you can see there is a second hop from server1 to server2. NTLM does not support the second hop if both WebAPIs are not on the same server.
So I configured an AD domain to support "kerberos".
It works now with the second hop.
My test-WebAPIs output the identity of the user like this:
server1: test.domain/user1
server2: test.domain/user1
But if I change the logged-in user on Client1 and execute the same request as "otherUser2", only the first hop gets the correct identity:
server1: test.domain/otherUser2
server2: test.domain/user1
On the second hop the old user of the first request is displayed.
I tested multiple scenarios: the same behaviour occurs if the subsequent requests come from another client with another Windows user...
It looks like the Windows identity of the first request is cached on server2... This is a big problem for me and I think this should not be possible... It's a big security hole if a request is executed in the wrong user context!
Is this a known problem? Did I do something wrong?
Is there a solution or a better configuration?
On the first ASP.NET WebAPI I use impersonation like this:
WindowsIdentity identity = (WindowsIdentity)HttpContext.Current.User.Identity;
using (var wic = identity.Impersonate())
{
    try
    {
        WebClient c = new WebClient
        {
            UseDefaultCredentials = true
        };
        // ... the request to server2 is made here with c, under the impersonated identity ...
    }
    finally
    {
        wic.Undo(); // revert impersonation (Dispose also reverts at the end of the using block)
    }
}
I use the WebClient class of .NET.
Both IIS servers have "Windows Authentication" with the "Negotiate" and "NTLM" providers configured.
Server1 is the domain controller, DNS and DHCP server (plus IIS).
Server2 is just a normal server with IIS installed.
All computers are in the same domain.
I cannot explain this behavior... It makes no sense to me. Why should the first incoming request's identity be cached on 'server2'?
If I restart IIS and re-execute the requests with another Windows identity, that becomes the "first working request" and the others get its identity on 'server2'.
I found the solution/problem.
It was in fact a caching problem... The identity of the first user was cached.
You can change this behavior with these IIS settings:
authPersistNonNTLM
authPersistSingleRequest
Or your HTTP client at API1 can disable TCP connection caching by sending:
Connection: close
instead of
Connection: keep-alive
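One hedged sketch of forcing this from .NET code at API1 (the URL is a placeholder; WebClient treats Connection as a restricted header, so HttpWebRequest's KeepAlive flag is used here instead):
using System.Net;

// Sketch: send 'Connection: close' so server2 cannot reuse the authenticated
// TCP connection for a different user's request. The URL is a placeholder.
var request = (HttpWebRequest)WebRequest.Create("http://server2/api/identity");
request.UseDefaultCredentials = true;
request.KeepAlive = false; // emits Connection: close instead of keep-alive
using (var response = (HttpWebResponse)request.GetResponse())
{
    // read the response as usual
}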
But the actual problem in my scenario was Fiddler (an HTTP proxy tool).
I had configured Fiddler as a proxy in the web.config at API1. This kept the connection open, so the first identity was reused...
I hope this answer helps some others.

TFS API TF30063: You are not authorized to access http://

I have an ASP.NET site on IIS using Windows authentication (pass-through) and I am trying to connect to the TFS API programmatically.
When I run it on my dev machine all is fine but once the site is on IIS I keep getting {"TF30063: You are not authorized to access http://mytfsserver."}
I have debugged the live site and it seems like it always takes the user as "NT SYSTEM" instead of the actual logged in user.
If I put my account details for the application pool it works as expected.
Any idea on how I can bypass this?
Code where it fails:
Uri collectionUri = new Uri(rootWebConfig.AppSettings.Settings["TFS_TEST_URI"].Value); //TEST ENV
tpc = new TfsTeamProjectCollection(collectionUri, CredentialCache.DefaultNetworkCredentials);
tpc.Authenticate();
workItemStore = tpc.GetService<WorkItemStore>();
You are hitting a standard Active Directory double-hop authentication issue.
You have two options:
Username & password - if you ask the user to physically enter their username and password you can authenticate as them.
Kerberos - if you enable and configure Kerberos you can enable pass-through authentication. You need a properly configured SPN: http://blogs.technet.com/b/askds/archive/2008/06/13/understanding-kerberos-double-hop.aspx
I would go with Kerberos tokens. It's a pain to configure but works a treat. Your only other alternative is to run your web app on the TFS server and bypass the double hop.
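For the first option, a minimal sketch using the same TfsTeamProjectCollection constructor as the question (the credential values are placeholders):
using System;
using System.Net;
using Microsoft.TeamFoundation.Client;

// Sketch: authenticate to TFS with explicitly collected credentials rather than
// the app pool identity. Username, password and domain are placeholders.
var collectionUri = new Uri("http://mytfsserver/tfs/DefaultCollection");
var explicitCredentials = new NetworkCredential("username", "password", "DOMAIN");
var tpc = new TfsTeamProjectCollection(collectionUri, explicitCredentials);
tpc.Authenticate();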
