I'm migrating over from Couchbase Server 4.x to Couchbase Server 5.x. I understand that there is a new user-based authentication. But it's not clear to me how I should be using ClusterHelper since there is no longer a password for the bucket.
Previously:
// at app startup
ClusterHelper.Initialize(new ClientConfiguration { ... });

// later in the DAL
var bucket = ClusterHelper.GetBucket("bucketname", "bucketpassword");
But now, using 5.x, there's no longer a bucket password. Where do I enter the user credentials?
There is a new overload of ClusterHelper.Initialize that takes an IAuthenticator parameter.
So now, for example:
// at app startup
ClusterHelper.Initialize(
    new ClientConfiguration { ... },
    new PasswordAuthenticator("username", "password"));

// later in your data access code
var bucket = ClusterHelper.GetBucket("bucketname");
Here's the overload in ClusterHelper.cs.
If you were using Web.config previously, you may also need to change it: the password is no longer on the <bucket /> node; it's now on the <couchbase /> node. See this answer in the forums.
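For example, the 5.x-style config section looks roughly like this (a sketch from memory; the exact attribute names may vary by SDK version, so check the release notes for your version):

```xml
<couchbase username="myusername" password="mypassword">
  <servers>
    <add uri="http://localhost:8091" />
  </servers>
  <buckets>
    <!-- no password attribute here any more -->
    <add name="bucketname" />
  </buckets>
</couchbase>
```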
Update: since this started with some general challenges and has since zeroed in on a more specific issue I've re-posted as a new question here.
I have been following Microsoft's advice for sharing an authentication cookie issued by an ASP.NET web app with a separate dotnet core web app running on the same domain. Unfortunately the dotnet core app is not unprotecting the cookie as expected and I'm struggling to diagnose why.
I'll try to simplify what I've done. Before I do I should point out that both apps will run under the same path - let's call it mydomain.com/path - so the auth cookie will be scoped to that path. There's a lot of additional complexity because I'm actually trying to wire this into an old OIDC library, but I think the main issue I'm having is on the other side where I have a fairly lightweight dotnet core app trying to use the same session.
First, in my original .NET app (it's 4.7.2) I'm creating a new data protector using the Microsoft.Owin.Security.Interop library:
var appName = "<my-app-name>";
var encryptionSettings = new AuthenticatedEncryptionSettings()
{
EncryptionAlgorithm = EncryptionAlgorithm.AES_256_CBC,
ValidationAlgorithm = ValidationAlgorithm.HMACSHA256
};
var interopProvider = DataProtectionProvider.Create(
new DirectoryInfo(keyRingSharePath),
builder =>
{
builder.SetApplicationName(appName);
builder.SetDefaultKeyLifetime(TimeSpan.FromDays(365 * 20));
builder.UseCryptographicAlgorithms(encryptionSettings);
if (!generateNewKey)
{
builder.DisableAutomaticKeyGeneration();
}
});
return new DataProtectorShim(
interopProvider.CreateProtector(
"Microsoft.AspNetCore.Authentication.Cookies.CookieAuthenticationMiddleware",
appName,
"v2"));
Note that <my-app-name> is also the name of the cookie, as set in the CookieAuthenticationOptions.
keyRingSharePath is for now just a local path on my PC. The first time I run this I have generateNewKey set to true to ensure a new key is generated at this path. Thereafter I leave this false to ensure that key is re-used.
I also assign the ticket data format using this data protector as per the docs: new TicketDataFormat(dataProtector).
This works as expected in that authentication still works and I can even verify the data protection by using an instance of the TicketDataFormat created above and calling its Unprotect method with the auth cookie value and getting a ClaimsIdentity back.
Next I've created a simple dotnet core app which runs on the same domain as the above app. In the Startup I've added this:
var primaryAuthenticationType = "<my-app-name>";
var cookieName = primaryAuthenticationType;
services.AddDataProtection()
.PersistKeysToFileSystem(new DirectoryInfo(keyRingSharePath))
.SetDefaultKeyLifetime(TimeSpan.FromDays(365 * 20))
.DisableAutomaticKeyGeneration()
.UseCryptographicAlgorithms(new AuthenticatedEncryptorConfiguration()
{
EncryptionAlgorithm = EncryptionAlgorithm.AES_256_CBC,
ValidationAlgorithm = ValidationAlgorithm.HMACSHA256
})
.SetApplicationName(primaryAuthenticationType);
services.ConfigureApplicationCookie(options => {
options.Cookie.Name = cookieName;
options.Cookie.Path = "/path";
});
Obviously keyRingSharePath holds the same value as in the ASP.NET app. I also have the following in the Configure method in Startup:
app.UseAuthentication();
app.UseAuthorization();
Having signed in using the ASP.NET app I then switch to my dotnet core app. But unfortunately when debugging any controller with a route under /path I find that User.Identity.IsAuthenticated is false.
I've also tried unprotecting the cookie manually like this, using an injected instance of IDataProtectionProvider:
var protector = protectionProvider.CreateProtector(
"Microsoft.AspNetCore.Authentication.Cookies.CookieAuthenticationMiddleware",
"<my-app-name>",
"v2");
var ticketDataFormat = new TicketDataFormat(protector);
var ticket = ticketDataFormat.Unprotect("<auth-cookie-value>");
return ticket?.Principal;
However, ticket is assigned null and I can't find any way to debug why it won't unprotect the cookie value. As far as I can tell this should use the same logic that my ASP.NET app used when I confirmed that I could unprotect that cookie.
Any guidance would be much appreciated.
UPDATE 1
I've been playing around a bit more by trying to deconstruct the code that unprotects the cookie. I've added the following code to a controller on my dotnet core app:
var formatVersion = 3;
var protector = _dataProtectionProvider.CreateProtector("Microsoft.AspNetCore.Authentication.Cookies.CookieAuthenticationMiddleware", "<my-app-name>", "v2");
var cookieValue = Request.Cookies["<my-app-name>"];
var cookieValueDecoded = Base64UrlTextEncoder.Decode(cookieValue);
var unprotectedBytes = protector.Unprotect(cookieValueDecoded);
using (MemoryStream memoryStream = new MemoryStream(unprotectedBytes))
{
using (GZipStream gzipStream = new GZipStream((Stream)memoryStream, CompressionMode.Decompress))
{
using (BinaryReader reader = new BinaryReader((Stream) gzipStream))
{
if (reader.ReadInt32() != formatVersion) return (AuthenticationTicket) null;
string authenticationType = reader.ReadString();
string str1 = ReadWithDefault(reader, "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/name");
string roleType = ReadWithDefault(reader, "http://schemas.microsoft.com/ws/2008/06/identity/claims/role");
int length = reader.ReadInt32();
Claim[] claimArray = new Claim[length];
for (int index = 0; index != length; ++index)
{
string type = ReadWithDefault(reader, str1);
string str2 = reader.ReadString();
string valueType = ReadWithDefault(reader, "http://www.w3.org/2001/XMLSchema#string");
string str3 = ReadWithDefault(reader, "LOCAL AUTHORITY");
string originalIssuer = ReadWithDefault(reader, str3);
claimArray[index] = new Claim(type, str2, valueType, str3, originalIssuer);
}
ClaimsIdentity identity = new ClaimsIdentity((IEnumerable<Claim>)claimArray, authenticationType, str1, roleType);
}
}
}

// ReadWithDefault is the helper from the same Owin TicketSerializer: it
// substitutes a default when the serializer wrote its "\0" placeholder.
static string ReadWithDefault(BinaryReader reader, string defaultValue)
{
    string str = reader.ReadString();
    return str == "\0" ? defaultValue : str;
}
Most of this code comes from Microsoft.Owin.Security.DataHandler.Serializer.TicketSerializer in Microsoft.Owin.Security, Version=3.0.1. It is essentially a reversal of the protection logic used in the originating ASP.NET app, and it works fine: it ends with a ClaimsIdentity that matches the account which authenticated on the other app. This tells me that the cryptography config and keys match between the apps.
So there must be some difference in the built-in code that protects and unprotects the authentication cookie on the two sides. But I'm unclear about how to diagnose where the difference is. My assumption is that I've missed something on the dotnet core side which would make the cookie authentication interoperable.
UPDATE 2
Having dug around a bit more I think this comes down to the data serializer format version. In my dotnet core app if I dig into the TicketDataFormat I see it uses TicketSerializer.Default which is an implementation of IDataSerializer<AuthenticationTicket> that has a hard-coded FormatVersion of 5. There's also a comment at the top of TicketSerializer saying:
This MUST be kept in sync with Microsoft.Owin.Security.Interop.AspNetTicketSerializer
However you can see in my UPDATE 1 above that when I ripped out some of the serialization logic from the ASP.NET web app, it is working with a format version of 3. Note that this app is using the version of TicketDataFormat that comes with Microsoft.Owin.Security, Version=3.0.1.0 and that assembly has a TicketSerializer with hard-coded FormatVersion of 3.
So, how can I ensure these serializers are compatible on both sides?
UPDATE 3
Realised I'm a total tool and was missing a key part of the Microsoft docs. Above I state this:
I also assign the ticket data format using this data protector as per the docs: new TicketDataFormat(dataProtector).
Well, actually I should have been using the AspNetTicketDataFormat type provided by the Microsoft.Owin.Security.Interop library. Having corrected this I can now obtain a claims principal in my dotnet core app using the following:
var dataProtector = _dataProtectionProvider.CreateProtector("Microsoft.AspNetCore.Authentication.Cookies.CookieAuthenticationMiddleware", "<my-app-name>", "v2");
var ticketDataFormat = new TicketDataFormat(dataProtector);
var ticket = ticketDataFormat.Unprotect(cookieValue, "");
Here I can see ticket.Principal.Identity populated with my identity from the cookie.
However, I still can't get my app in an authenticated state. I'm clearly still not wiring up the cookie authentication middleware correctly. My Startup still just looks like the second code block in my original post. Feels like the final hurdle if anyone can help.
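One thing worth noting: ConfigureApplicationCookie only configures the ASP.NET Core Identity application cookie, so for a plain cookie scheme the wiring would instead go through AddAuthentication/AddCookie. A minimal sketch (cookie name and path taken from the post; the data protection provider would be resolved from DI; untested):

```csharp
services.AddAuthentication(CookieAuthenticationDefaults.AuthenticationScheme)
    .AddCookie(options =>
    {
        options.Cookie.Name = "<my-app-name>";
        options.Cookie.Path = "/path";
        // Use the same protector purposes as the ASP.NET 4.7.2 app so both
        // sides can unprotect each other's tickets.
        options.TicketDataFormat = new TicketDataFormat(
            dataProtectionProvider.CreateProtector(
                "Microsoft.AspNetCore.Authentication.Cookies.CookieAuthenticationMiddleware",
                "<my-app-name>",
                "v2"));
    });
```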
Eventually, by the time I reached Update 3, I had been working on this long enough that the session had expired, but I was still using that session to test. So the reason my cookie was being rejected was session expiry, not a coding issue. I discovered this once I added trace-level logging to the app, and there was the answer, staring at me from the console! I'll leave this here in case the process described in the post benefits anyone else.
var client = new AmazonCognitoIdentityProviderClient("MYKEY", "MYSECRET", RegionEndpoint.USEast1);
var request = new AdminGetUserRequest();
request.Username = "USERNAME";
request.UserPoolId = "POOLID";
var user = client.AdminGetUserAsync(request).Result;
The key/secret are authenticating as a user with Administrator Access. For good measure, I've also given it the AmazonCognitoPowerUser policy.
The region endpoint is correct and the same as the one my user pool is in. The user pool Id is correct. The first part of the user pool ID matches the region.
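That region/pool-ID relationship can be sanity-checked mechanically: a Cognito user pool ID has the form "&lt;region&gt;_&lt;suffix&gt;", so its prefix must equal the client's region. A hypothetical helper (plain string logic, not part of the AWS SDK):

```csharp
// Returns true if the pool ID's region prefix matches the given region,
// e.g. "us-east-1_AbCdEfGhI" matches "us-east-1".
bool PoolIdMatchesRegion(string userPoolId, string region)
{
    int idx = userPoolId.IndexOf('_');
    return idx > 0 && userPoolId.Substring(0, idx) == region;
}
```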
I'm at a loss for where else this could possibly be going wrong. Any ideas?
Update 8/2/19
Manual CLI command:
PM> aws cognito-idp list-user-pools --region us-east-1 --max-results 10
{
"UserPools": []
}
The region is correct, so there must be some issue with permissions. Is there anything I could try tweaking on the pool, or other policies I may need to add to the user?
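One way to narrow this down is to confirm which IAM principal the CLI is actually authenticating as, since the empty result could come from the wrong identity rather than the wrong region (this assumes the same credentials are configured for the CLI as in the code):

```shell
# Show the account and IAM identity the current credentials resolve to
aws sts get-caller-identity --region us-east-1
# Then verify that identity's policies allow cognito-idp:ListUserPools
# and cognito-idp:AdminGetUser
```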
So, it looks like this is some sort of AWS glitch with the existing IAM user.
Having created a new user with exactly the same permissions, access works as intended both from CLI and the code in the original question.
Your configuration may actually be wrong. You downloaded awsconfiguration.json and it looks right, I know, but it can still be wrong. When you examine the JSON you will see a field: "CognitoUserPool": { PoolId, AppClientId, ... }.
You need to open your user pool and create a new app client, or verify the existing client's information. Check your awsconfiguration.json against that page's pool ID, app client ID, etc., and update your JSON. That should solve the problem.
I ran into this problem with the AWS CLI and it puzzled me too, but I learned that I needed to provide the profile name in the parameter list to get it to work. So it looked like this:
aws cognito-idp admin-get-user --profile dev-account ....
My profiles are stored on my Mac in ~/.aws/config (you can list them with cat ~/.aws/config | grep profile).
The config file is created by an in-house custom script. Here is what its contents look like:
[profile dev-account]
sso_start_url = https://yourcompanyname.awsapps.com/start#/
sso_region = us-east-1
sso_account_id = 1234567890
sso_role_name = PowerUserAccess
region = us-east-1
output = json
Also in this folder is a "credentials" file that holds these variables: profile name, aws_access_key_id, aws_secret_access_key, aws_session_token, aws_expiration.
So far all the examples of using Google Cloud Firestore with .net show that you connect to your Firestore db by using this command:
FirestoreDb db = FirestoreDb.Create(projectId);
But is this skipping the step of authentication? I can't seem to find an example of wiring it up to use a Google service account. I'm guessing you need to connect using the service account's private_key/private_key_id/client_email?
You can also use the credentials stored in a json file:
GoogleCredential cred = GoogleCredential.FromFile("credentials.json");
Channel channel = new Channel(FirestoreClient.DefaultEndpoint.Host,
FirestoreClient.DefaultEndpoint.Port,
cred.ToChannelCredentials());
FirestoreClient client = FirestoreClient.Create(channel);
FirestoreDb db = FirestoreDb.Create("my-project", client);
I could not compile @Michael Bleterman's code; however, the following worked for me:
using Google.Cloud.Firestore;
using Google.Cloud.Firestore.V1;
var jsonString = File.ReadAllText(_keyFilepath);
var builder = new FirestoreClientBuilder {JsonCredentials = jsonString};
FirestoreDb db = FirestoreDb.Create(_projectId, builder.Build());
Packages I use:
<PackageReference Include="Google.Cloud.Firestore" Version="2.0.0-beta02" />
<PackageReference Include="Google.Cloud.Storage.V1" Version="2.5.0" />
But is this skipping the step of authentication?
No. It will use the default application credentials. If you're running on Google Cloud Platform (AppEngine, GCE or GKE), they will just be the default service account credentials for the instance. Otherwise, you should set the GOOGLE_APPLICATION_CREDENTIALS environment variable to refer to a service account credential file.
From the home page of the user guide you referred to:
When running on Google Cloud Platform, no action needs to be taken to authenticate.
Otherwise, the simplest way of authenticating your API calls is to download a service account JSON file then set the GOOGLE_APPLICATION_CREDENTIALS environment variable to refer to it. The credentials will automatically be used to authenticate. See the Getting Started With Authentication guide for more details.
It's somewhat more awkward to use non-default credentials; this recent issue gives an example.
This worked for me.
https://pieterdlinde.medium.com/netcore-and-cloud-firestore-94628943eb3c
string filepath = "/Users/user/Downloads/user-a4166-firebase-adminsdk-ivk8q-d072fdf334.json";
Environment.SetEnvironmentVariable("GOOGLE_APPLICATION_CREDENTIALS", filepath);
fireStoreDb = FirestoreDb.Create("user-a4166");
The simplest way:
Get service account json file and hardcode values into a class:
public class FirebaseSettings
{
[JsonPropertyName("project_id")]
public string ProjectId => "that-rug-really-tied-the-room-together-72daa";
[JsonPropertyName("private_key_id")]
public string PrivateKeyId => "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx";
// ... and so on
}
Add it to Startup.cs:
var firebaseSettings = new FirebaseSettings();
var firebaseJson = JsonSerializer.Serialize(firebaseSettings);
services.AddSingleton(_ => new FirestoreProvider(
new FirestoreDbBuilder
{
ProjectId = firebaseSettings.ProjectId,
JsonCredentials = firebaseJson // <-- service account json file
}.Build()
));
Add a wrapper, FirestoreProvider:
public class FirestoreProvider
{
private readonly FirestoreDb _fireStoreDb = null!;
public FirestoreProvider(FirestoreDb fireStoreDb)
{
_fireStoreDb = fireStoreDb;
}
// ... your methods here
}
Here is a full example of a generic provider.
https://dev.to/kedzior_io/simple-net-core-and-cloud-firestore-setup-1pf9
I'm creating a service to search for users in LDAP. This should be fairly straightforward and probably done a thousand times, but I cannot seem to break through properly. I thought I had it, but then I deployed this to IIS and it all fell apart.
The following is setup as environment variables:
ldapController
ldapPort
adminUsername 🡒 Definitely a different user than the error reports
adminPassword
baseDn
And read in through my Startup.Configure method.
EDIT I know they are available to IIS, because I returned them in a REST endpoint.
This is my code:
// Connect to LDAP
LdapConnection conn = new LdapConnection();
conn.Connect(ldapController, ldapPort);
conn.Bind(adminUsername, adminPassword);
// Run search
LdapSearchResults lsc = conn.Search(
baseDn,
LdapConnection.SCOPE_SUB,
lFilter,
new string[] { /* lots of attributes to fetch */ },
false
);
// List out entries
var entries = new List<UserDto>();
while (lsc.hasMore() && entries.Count < 10) {
LdapEntry ent = lsc.next(); // <--- THIS FAILS!
// ...
}
return entries;
As I said, when debugging this in Visual Studio it all works fine. When deployed to IIS, the error is:
Login failed for user 'DOMAIN\IIS_SERVER$'
Why? The user specified in adminUsername should be the user used to login (through conn.Bind(adminUsername, adminPassword);), right? So why does it explode stating that the IIS user is the one doing the login?
EDIT I'm using Novell.Directory.Ldap.NETStandard
EDIT The 'user' specified in the error above, is actually NOT a user at all. It is the AD registered name of the computer running IIS... If that makes any difference at all.
UPDATE After consulting with colleagues, I set up a new application pool on IIS, and tried to run the application as a specified user instead of the default passthrough. Exactly the same error message regardless of which user I set.
Try going via NetworkCredential, which allows you to specify the domain:
var networkCredential = new NetworkCredential(userName, password, domain);
conn.Bind(networkCredential);
If that does not work, specify auth type Basic (not sure what the default is) before the call to Bind:
conn.AuthType = AuthType.Basic;
I am building an integration between my organization back-end systems and BOX.
One of the scenarios is that when certain event is happening inside my organization there is a need to create a folder in BOX and add collaboration objects to that folder (connect groups to the folder).
I have no problem to create the folder but when trying to create the collaboration I am getting the following error:
Box.V2.Exceptions.BoxException: Bearer realm="Service", error="insufficient_scope", error_description="The request requires higher privileges than provided by the access token."
I am using BOX SDK for .Net to interact with BOX.
The application I created in BOX is assigned to use AppUser User Type and I provided all the scopes that BOX allows me (All scopes except "Manage enterprise" which is disabled).
The code that fails is (C#):
var privateKey = File.ReadAllText(Settings.JwtPrivateKeyFile);
var boxConfig = new BoxConfig(Settings.ClientID, Settings.ClientSecret, Settings.EnterpriseID, privateKey, Settings.JwtPrivateKeyPassword, Settings.JwtPublicKeyID);
var jwt = new BoxJWTAuth(boxConfig);
var token = jwt.AdminToken();
var client = jwt.AdminClient(token);
var addRequest = new BoxCollaborationRequest(){
Item = new BoxRequestEntity() {
Id = folderId,
Type = BoxType.folder
},
AccessibleBy = new BoxCollaborationUserRequest(){
Type = BoxType.group,
Id = groupId
},
Role = "viewer"
};
var api = client.CollaborationsManager;
var task = api.AddCollaborationAsync(addRequest);
task.Wait();
When running this code but replacing the admin token with a developer token generated from the Box Application Edit Page, it works.
Any help is appreciated
OK, I had a long discussion with the Box technical team and here is the conclusion: using an AppUser is not the right choice for my scenario because it is limited to the folders it creates. There is no way to bypass that.
The solution is:
1. Configure the Application to use standard user
2. Create User with administrative rights that will be used by the API to do activities in BOX. I named this user "API User"
3. Follow the OAuth 2 tutorial to generate an access token and refresh token that the .Net application can use instead of generating a token for the AppUser. The tutorial can be found at https://www.box.com/blog/get-box-access-tokens-in-2-quick-steps/
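For step 3, the resulting tokens can then be plugged into the .Net SDK via an OAuthSession instead of a JWT admin token. A sketch (clientId, clientSecret, redirectUri, and the token values are placeholders you would fill in from your app's OAuth 2 flow):

```csharp
using Box.V2;
using Box.V2.Auth;
using Box.V2.Config;

// Tokens obtained from the OAuth 2 flow for the "API User"
var config = new BoxConfig(clientId, clientSecret, new Uri(redirectUri));
var session = new OAuthSession(accessToken, refreshToken, 3600, "bearer");
var client = new BoxClient(config, session);
// client.CollaborationsManager now acts with the API User's privileges
```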
If the app user is a member of the group(s) you want to be able to access the folder then you shouldn't need to set up a collaboration, the users should just have access.