I recently encountered an odd problem with RSACryptoServiceProvider.VerifyHash.
I have a web application using it for decryption. When users were running the web service over our VPN it became very, very slow. When they had no connection, or just a plain internet connection, they were fine.
After much digging I found that every time RSACryptoServiceProvider.VerifyHash is called it makes an LDAP request to check MyMachineName\ASPNET.
This doesn't happen with our WebDev (Cassini-based) servers, as they run as the current user, and it is only really slow over the VPN, but it shouldn't happen at all.
This seems wrong for a couple of reasons:
Why is it checking the domain controller for a local machine user?
Why does it care? The encryption/decryption works regardless.
Does anyone know why this occurs or how best to work around it?
From this KB it looks like a 'wrinkle' in the code that needs sorting:
http://support.microsoft.com/kb/948080
Thanks (+1 & ans)
Tested and works.
From the KB article:
The SignData or VerifyData methods always perform an OID lookup query which is sent to the domain controller, even when the application is running in a local user account. This may cause slowness while signing or verifying data. Logon failure audit events occur on the DC because the client machine's local user account is not recognized by the domain. Therefore, the OID lookup fails.
This is exactly what we were seeing.
We changed this line:
rsa.VerifyHash( hashedData, CryptoConfig.MapNameToOID( "SHA1" ), signature );
To this:
rsa.VerifyHash( hashedData, null, signature );
And that fixed it.
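For anyone who wants to see the workaround in context, here is a minimal sketch of the verification path with the null OID applied (the key loading via FromXmlString and the SHA1 choice are illustrative, not from the original code):

using System;
using System.Security.Cryptography;

static bool VerifySignature(byte[] data, byte[] signature, string publicKeyXml)
{
    using (var sha1 = SHA1.Create())
    using (var rsa = new RSACryptoServiceProvider())
    {
        rsa.FromXmlString(publicKeyXml);               // illustrative key loading
        byte[] hashedData = sha1.ComputeHash(data);

        // Original (slow over VPN): rsa.VerifyHash(hashedData, CryptoConfig.MapNameToOID("SHA1"), signature);
        // Passing null for the OID skips the lookup that triggers the LDAP request.
        return rsa.VerifyHash(hashedData, null, signature);
    }
}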
My application's main function is to change a Google G-Suite user's password using Google's Google.Apis.Admin.Directory.directory_v1 nuget package.
The API call works 95% of the time (and resets a target user's password), but intermittently, the API call throws an exception with the Message text:
Precondition Failed [412] Errors [ Message[Precondition Failed] Location[If-Match - header] Reason[conditionNotMet] Domain[global] ]
I've done a lot of research, and it seems that a client-specified precondition is being included in the REST call the API makes to the Google API server, and the server is determining that the condition is not met (see https://www.rfc-editor.org/rfc/rfc7232#section-4.2), or that the state of the object being changed is bad (https://developers.google.com/calendar/v3/errors). The strange thing is that everything works nearly all of the time, but then fails every now and then. It really seems like some kind of resource-based error (too many calls submitted recently, too many users licensed in the domain), or maybe bad data (bad or missing password, bad user), or even permissions (user is in a group/OU that can't be managed). But the error message gives nothing to go on, and I've mostly ruled out the most obvious possibilities. I've googled the exact message and found numerous people with similar complaints, but no documented causes.
Correction from original: I am able to capture REST calls with Fiddler (with https capture configured), but I can't reproduce the original error while capturing, so it doesn't help much.
Any suggestions for how to reproduce and/or troubleshoot the issue?
Here is the code (please ignore any obvious typos; I had to cut/paste/merge from a few sources to assemble a small, simple example). The real code definitely works nearly all of the time:
try
{
    // service is an instance of Google.Apis.Admin.Directory.directory_v1.DirectoryService
    string userEmail = googleUser + "@" + domain; // e.g. BobSmith@myGoogleDomain.com

    var userget = service.Users.Get(userEmail);
    User userob = userget.Execute();

    userob.ChangePasswordAtNextLogin = false;
    userob.Password = password;

    var patchRequest = service.Users.Patch(userob, userEmail);
    patchRequest.Execute();
}
catch (Exception e)
{
    // swallowing the exception here is what hides the 412 details
}
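If it helps with troubleshooting, one thing I'd try is catching the typed exception instead of swallowing everything, and retrying the patch. This is only a sketch (the retry count and delay are arbitrary), assuming the failure surfaces as Google.GoogleApiException, which exposes the status code and error details:

using System;
using System.Threading;
using Google;   // GoogleApiException

for (int attempt = 1; attempt <= 3; attempt++)
{
    try
    {
        service.Users.Patch(userob, userEmail).Execute();
        break;                                            // success, stop retrying
    }
    catch (GoogleApiException ex) when (attempt < 3)
    {
        // Log everything the API gives back: status code and error message.
        Console.WriteLine($"Attempt {attempt}: {ex.HttpStatusCode} - {ex.Error?.Message}");
        Thread.Sleep(TimeSpan.FromSeconds(2 * attempt));  // crude backoff before retrying
    }
}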
I'm having issues with .NET's UserPrincipal.GetGroups() method.
On nearly all systems in my domain I can call
var groups = UserPrincipal.Current.GetGroups().ToArray();
and it returns the groups the current user is in. But there is one Windows 2008 R2 Enterprise server which throws an exception when executing this, with the message:
The server is not operational.
Name: TESTDOMAIN.ORG
I think that this server has a different configuration somehow but it's part of the same domain.
Console.WriteLine(
new DirectoryEntry("LDAP://RootDSE")
.Properties["defaultNamingContext"]
.Value
.ToString()
);
Shows the same on all systems: DC=GLOBAL,DC=TESTDOMAIN,DC=ORG
Where should I look? What could be the problem? How can I solve it?
I finally found it.
The problem was that the server did not know which default gateway to use.
The solution is to go to the network options, select the LAN interface in use, edit the IPv4 entry and set a default gateway. That way the network is no longer shown as an "unidentified network" under network neighbourhood, and all LDAP-related queries work again.
This one had me struggling for days, so I hope this answer helps you too.
A server I connect to has recently changed its SSL certificate. Since the change, SSL authentication takes in excess of ten seconds to complete when the Certificate Revocation List is downloaded.
I'm using the RemoteCertificateValidationCallback to validate the certificate; however, the delay occurs BEFORE the callback is called, so it's not the building of the cert chain or any other action there that's causing the delay.
The issue only occurs when the CRL is NOT CACHED, i.e. I need to delete the CRL cache (Documents&settings/[user]AppData/Microsoft/CertificateUrlCache or something similar) to repro it more than once on a single day.
If I disable CRL checking in the AuthenticateAsClient() call, the authentication is quick.
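For reference, this is roughly what the fast path looks like; a sketch only, with a placeholder host and a stand-in validation callback, where the last argument to AuthenticateAsClient() is what turns the CRL download on or off:

using System.Net.Security;
using System.Net.Sockets;
using System.Security.Authentication;

var client = new TcpClient("server.example.com", 443);                  // placeholder host/port
var ssl = new SslStream(client.GetStream(), false,
    (sender, cert, chain, errors) => errors == SslPolicyErrors.None);   // stand-in for the real callback

// checkCertificateRevocation: false skips the CRL fetch; true re-introduces the delay when the CRL isn't cached.
ssl.AuthenticateAsClient("server.example.com", null, SslProtocols.Tls, checkCertificateRevocation: false);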
Using a network sniffer, I can see that when the CRL is eventually requested, it downloads almost instantaneously, so the delay is not a network latency one (at least not to the CRL server).
One odd thing that I see with the network sniffer is that after the initial SSL certificate retrieval from the server, there is a five-second delay until the CRL is downloaded.
Has anyone got any suggestions as to what may be going on during this stage, and what the delay may be caused by?
Thanks!
UPDATE: OK, I've used Reflector and a memory profiler to delve into AuthenticateAsClient. It looks like most of the time is spent building the certificate chain, i.e.:
if (!CAPISafe.CertGetCertificateChain(hChainEngine, pCertContext, ref pTime, invalidHandle, ref cert_chain_para, dwFlags, IntPtr.Zero, ref ppChainContext))
If I don't request CRL validation, then this returns almost instantaneously, with CRL-checking enabled, about 4 seconds.
I suspect I'll see the same delay if I manually attempt to build the chain in my RemoteCertificateValidationCallback.
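For anyone wanting to measure the same thing, here's a minimal sketch of building the chain manually with revocation checking off and on (serverCert stands in for the certificate received in the callback):

using System;
using System.Diagnostics;
using System.Security.Cryptography.X509Certificates;

static void TimeChainBuild(X509Certificate2 serverCert)
{
    foreach (var mode in new[] { X509RevocationMode.NoCheck, X509RevocationMode.Online })
    {
        var chain = new X509Chain();
        chain.ChainPolicy.RevocationMode = mode;

        var sw = Stopwatch.StartNew();
        bool ok = chain.Build(serverCert);   // the CRL (and any root update) downloads happen here
        sw.Stop();

        Console.WriteLine($"{mode}: {sw.ElapsedMilliseconds} ms, valid = {ok}");
    }
}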
This wouldn't really be a problem if the CRL were cached, however that caching doesn't seem to be working on a Windows 7 customer's machine. Why? Well, I guess that's the next task...
Could anyone explain what could be causing the chain-building to take so long?
It seems there is an answer to this question here:
https://blogs.msdn.microsoft.com/alejacma/2011/09/27/big-delay-when-calling-sslstream-authenticateasclient/
Digging a bit further to understand why CertGetCertificateChain took so long, I saw that we were trying to download the following file from the Internet:
http://www.download.windowsupdate.com/msdownload/update/v3/static/trustedr/en/authrootstl.cab
Why were we downloading this file? Well, this will happen by default on Windows when we build the chain of a cert which root CA cert is not installed in the system. This is called the Automatic Root Certificates Update feature, and it is available on Windows XP/Server 2003 and later OS versions, including Windows 7/Server 2008 R2.
I have a small C# solution used to check users credentials. It works fine for two of my teammates, but on my PC I get an exception.
The relevant code:
PrincipalContext context = new PrincipalContext(ContextType.Domain);
if (context.ValidateCredentials(System.Environment.UserDomainName + "\\" + usr, pwd))
return true;
else
return false;
And the exception is:
DirectoryOperationException, "The server cannot handle directory requests.".
I tried creating the context with an explicit server name and the 636 port number, but this didn't help either.
Any ideas?
I had this problem too using IIS Express and VS 2010. What fixed it for me was a comment on another thread.
Validate a username and password against Active Directory?
but I'll save you the click and the search... :) Just add ContextOptions.Negotiate to your ValidateCredentials call, like below:
bool valid = context.ValidateCredentials(user, pass, ContextOptions.Negotiate);
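Putting that together with the snippet from the question, the whole check would look something like this (a sketch; the domain-prefixed user name simply follows the original code):

using System;
using System.DirectoryServices.AccountManagement;

static bool ValidateUser(string usr, string pwd)
{
    using (var context = new PrincipalContext(ContextType.Domain))
    {
        // ContextOptions.Negotiate avoids the bind that was failing with
        // "The server cannot handle directory requests."
        return context.ValidateCredentials(Environment.UserDomainName + "\\" + usr, pwd, ContextOptions.Negotiate);
    }
}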
I had this issue: things were working on my dev machine but didn't work on the server. Turned out that IIS on the server was set up to run as LocalMachine. I changed it to NetworkService (the default) and things started working.
So basically check the user of the app pool if this is running on IIS.
I had to just create a new app pool, assign it .NET 2.0, and then assign the new app pool to our web app, and it started working. We had .NET 3.5 SP2, so the hotfix wasn't ideal for us. Since the WWW service usually runs as Local System, I questioned that too. But since it was .NET- and security-related, I gave the app pool a shot first, and it worked.
Perhaps you need the hotfix?
FIX: DirectoryOperationException exception
And you are an Admin, or the ID that your service is running under is an Admin on your PC, right?
I take it you already looked into this:
System.DirectoryServices.Protocols
"You may receive a less than helpful DirectoryOperationException(“The server cannot handle directory requests.”) what isn’t quite so amusing about this is that it didn’t even try to communicate with the server. The solution was to add the port number to the server. So instead of passing “Server” to open the LdapConnection, I passed “server:636”. By the way, LDAPS is port 636 – rather than the 389 port used by LDAP."
Good point, I wouldn't expect that Win7/.NET 3.5 would need that patch. How about the info provided in this question:
Setting user's password via System.DirectoryServices.Protocols in AD 2008 R2
Apologies in advance as I haven't had much experience with directories before.
I have an ASP.net application, and I have to validate its users against an Active Directory Application Mode instance running on Server 2k3. I was previously attempting a connection with DirectoryEntry and catching the COMException if the user's credentials (userPrincipalName & password) were wrong, but I had a number of problems when trying to bind as users who weren't a member of any ADAM groups (which is a requirement).
I recently found the System.DirectoryServices.AccountManagement library, which seems a lot more promising, but although it works on my local machine, I'm having some trouble when testing this in our testbed environment. Chances are I'm simply misunderstanding how to use these objects correctly, as I wasn't able to find any great documentation on the matter. Currently I am creating a PrincipalContext with a Windows username and password, then calling ValidateCredentials with the user's userPrincipalName and password. Here's a very short excerpt of what I'm doing:
using (var serviceContext = new PrincipalContext(
ContextType.ApplicationDirectory,
serverAddress,
rootContainer,
ContextOptions.Negotiate | ContextOptions.SecureSocketLayer,
serviceAccountUsername,
serviceAccountPassword)) {
bool credentialsValid = serviceContext.ValidateCredentials(userID, password, ContextOptions.SecureSocketLayer | ContextOptions.SimpleBind);
}
If the user's credentials are valid, I then go on to perform other operations with that principal context. As I said, this works for both users with and without roles in my own environment, but not in our testbed environment. My old DirectoryEntry way of checking the user's credentials still works with the same configuration.
After a very long morning, I was able to figure out the problem!
The exception message I was receiving when calling ValidateCredentials was extremely vague. After installing Visual Studio 2008 in the test environment (which is on the other side of the country, mind you!), I was able to debug and retrieve the HRESULT of the error. After some very deep searching on Google, I found some very vague comments about "SSL warnings" being surfaced as other exceptions, and that enabling SCHANNEL logging (which I'm very unfamiliar with!) might reveal some more insight. So, after switching that on in the registry and retrying the connection, I was presented with this:
The certificate received from the remote server does not contain the expected name. It is therefore not possible to determine whether we are connecting to the correct server. The server name we were expecting is ADAMServer. The SSL connection request has failed. The attached data contains the server certificate.
I found this rather strange, as the old method of connecting via SSL worked fine. In any case, my co-worker was able to spot the problem - the name on the SSL certificate that had been issued on the server was that of the DNS name ("adam2.net") and not the host name ("adamserver"). Although I'm told that's the norm, it just wasn't resolving the correct name when using PrincipalContext.
Long story short; re-issuing a certificate with the computer name and not the DNS name fixed the problem!