Working with MongoLab from .NET - permissions problems - c#

I've opened an account on MongoLab (a free sandbox account for testing). I have code that runs against my local MongoDB server just fine, without any problems.
In MongoLab I've created a database, created a user for that database, and placed the connection string they gave me in my application:
mongodb://<dbuser>:<dbpassword>@<my_id>.mlab.com:52408/<my_db>
I even tried this:
mongodb://<dbuser>:<dbpassword>@<my_id>.mlab.com:52408/<my_db>?authMode=scram-sha1&rm.tcpNoDelay=true
In my .NET code I'm connecting to my database:
MongoClient client = new MongoClient(MY_CONNECTION_STRING);
IMongoDatabase database = client.GetDatabase("MyTestDb");
_versionRulesCollection = database.GetCollection<VersionRule>("VersionRules");
_versionDetailsCollection = database.GetCollection<VersionDetails>("VersionDetails");
but when I try to do something like this:
_versionDetailsCollection.Indexes.CreateOneAsync(Builders<VersionDetails>.IndexKeys.Ascending(x => x.ProductName).Ascending(y => y.DeviceType).Ascending(z => z.VersionName))
I get an exception saying
System.AggregateException: One or more errors occurred. ---> MongoDB.Driver.MongoCommandException: Command createIndexes failed: not authorized on MyTestDb to execute command { createIndexes: "VersionDetails", indexes: [ { key: { ProductName: 1, DeviceType: 1, VersionName: 1 }, name: "ProductName_1_DeviceType_1_VersionName_1" } ] }
Even trying to do a query I got:
not authorized for query on MyTestDb.VersionDetails
I don't see any way to set permissions for users on the portal. The database is created dynamically by my app, and I'm at a loss here.
What am I doing wrong (it works on my local machine)?
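For reference, one way to rule out connection-string parsing problems is to build the client settings explicitly. A minimal sketch, assuming the 2.7+ .NET driver (where MongoClientSettings.Credential exists) and that the user was created on the same database being opened; all names are placeholders:

using MongoDB.Driver;

// Placeholders: substitute the mLab host, port, database, and user.
var settings = new MongoClientSettings
{
    Server = new MongoServerAddress("<my_id>.mlab.com", 52408),
    // On mLab the user is defined on the database itself, so the authentication
    // database is the same database being opened.
    Credential = MongoCredential.CreateCredential("<my_db>", "<dbuser>", "<dbpassword>")
};

var client = new MongoClient(settings);
var database = client.GetDatabase("<my_db>");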

Related

Azure App Configuration Managed Identity failing when called from Azure Function

I'm following Use managed identities to access App Configuration. For step 5 I've assigned my function FilterFunction the App Configuration Data Reader role:
The code for my function on startup is the following:
var appConfigEndpoint = Environment.GetEnvironmentVariable("Endpoint");
var environment = Environment.GetEnvironmentVariable("Environment");
var sentinelValue = Environment.GetEnvironmentVariable("ConfigSentinelKey");
builder.ConfigurationBuilder.AddAzureAppConfiguration(options =>
{
    // Load the configuration using labels
    options.Connect(new Uri(appConfigEndpoint), new ManagedIdentityCredential())
        .ConfigureRefresh(refreshOptions => refreshOptions.Register(
            key: sentinelValue,
            label: environment,
            refreshAll: true))
        .Select(KeyFilter.Any, environment);
});
However, when I publish my function to Azure I see the following error:
Why am I getting this error?
Once a role is assigned to grant access to Azure App Configuration, it may take up to ~15 minutes to propagate. During that time, the error message you provided will be observed.
This is especially true if the identity is first used to make a request without having the role assignment (resulting in a 403) and the role is added afterward.
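If you want to confirm that the function's managed identity can at least obtain a token for App Configuration while waiting for the role to propagate, here is a quick diagnostic sketch (assuming the Azure.Identity package; "https://azconfig.io/.default" is the assumed App Configuration token scope):

using System;
using Azure.Core;
using Azure.Identity;

// Diagnostic only: acquire a token with the same credential type the startup code uses.
var credential = new ManagedIdentityCredential();
var token = credential.GetToken(new TokenRequestContext(new[] { "https://azconfig.io/.default" }));
Console.WriteLine($"Token acquired; expires {token.ExpiresOn}.");
// Note: a token can be issued even while the RBAC assignment is still propagating,
// in which case App Configuration calls keep returning 403 until it catches up.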

LDAP search fails on server, not in Visual Studio

I'm creating a service to search for users in LDAP. This should be fairly straightforward and probably done a thousand times, but I cannot seem to break through properly. I thought I had it, but then I deployed this to IIS and it all fell apart.
The following is setup as environment variables:
ldapController
ldapPort
adminUsername 🡒 Definitely a different user than the error reports
adminPassword
baseDn
These are read in through my Startup.Configure method.
EDIT I know they are available to IIS, because I returned them in a REST endpoint.
This is my code:
// Connect to LDAP
LdapConnection conn = new LdapConnection();
conn.Connect(ldapController, ldapPort);
conn.Bind(adminUsername, adminPassword);

// Run search
LdapSearchResults lsc = conn.Search(
    baseDn,
    LdapConnection.SCOPE_SUB,
    lFilter,
    new string[] { /* lots of attributes to fetch */ },
    false
);

// List out entries
var entries = new List<UserDto>();
while (lsc.hasMore() && entries.Count < 10)
{
    LdapEntry ent = lsc.next(); // <--- THIS FAILS!
    // ...
}
return entries;
As I said, when debugging this in Visual Studio, it all works fine. When deployed to IIS, the error is:
Login failed for user 'DOMAIN\IIS_SERVER$'
Why? The user specified in adminUsername should be the one used to log in (through conn.Bind(adminUsername, adminPassword);), right? So why does it fail saying that the IIS user is the one doing the login?
EDIT I'm using Novell.Directory.Ldap.NETStandard
EDIT The 'user' specified in the error above is actually NOT a user at all. It is the AD-registered name of the computer running IIS, if that makes any difference at all.
UPDATE After consulting with colleagues, I set up a new application pool in IIS and tried to run the application as a specific user instead of the default passthrough. I get exactly the same error message regardless of which user I set.
Try binding via NetworkCredential, which allows you to specify the domain:
var networkCredential = new NetworkCredential(userName, password, domain);
conn.Bind(networkCredential);
If that does not work, specify auth type Basic (I'm not sure what the default is) before the call to Bind:
conn.AuthType = AuthType.Basic;
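If you stay on Novell.Directory.Ldap.NETStandard (the library the question uses), the equivalent is to pass the service account directly to Bind. A minimal sketch; whether your directory accepts the DOMAIN\user form or requires the account's full distinguished name is an assumption to verify:

using Novell.Directory.Ldap;

public static class LdapBindSketch
{
    public static LdapConnection BindAsServiceAccount(
        string ldapController, int ldapPort,
        string domain, string adminUsername, string adminPassword)
    {
        var conn = new LdapConnection();
        conn.Connect(ldapController, ldapPort);
        // Bind explicitly as the service account so the search does not run
        // under the IIS application pool / machine account identity.
        conn.Bind($"{domain}\\{adminUsername}", adminPassword);
        return conn;
    }
}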

Why is my Azure process not connecting to my Azure database?

I have a web app and a batch pool.
In the batch pool, created tasks are using the same database as the web app.
Today I started receiving the following exception in the batch:
A transport-level error has occurred when receiving results from the server. (provider: Session Provider, error: 19 - Physical connection is not usable)
The code base has not changed (older versions do not work either), there were no updates; it just popped up out of the blue. I repeated a couple of tasks in a controlled debug environment in Visual Studio and they went through without any exceptions thrown. I added the batch node's IP to the SQL Server firewall rules, also with no result. Meanwhile, the web application uses the database just fine.
Both the web app and batch pool are located in East US.
Here’s a snippet from Program.cs in my batch task:
MyEntities db; // MyEntities extends DbContext

var connstr = new System.Data.Entity.Core.EntityClient.EntityConnectionStringBuilder();
connstr.ProviderConnectionString = connectionString;
connstr.Provider = "System.Data.SqlClient";
connstr.Metadata = "res://*/MyEntities.csdl|res://*/MyEntities.ssdl|res://*/MyEntities.msl";

try
{
    db = new MyEntities(connstr.ConnectionString);
}
The connection string looks like this:
Persist Security Info=True; Data Source=<host>; Initial Catalog=<database name>; Integrated Security=False; User ID=<login>; Password=<password>; MultipleActiveResultSets=True; Connect Timeout=30; Encrypt=True;
Edit:
This problem has subsided the same way it appeared: out of the blue. I’ll carry out tests whenever it surfaces again.
You can try one of these two possibilities:
1. Enable an execution strategy:
public class MyEntitiesConfiguration : DbConfiguration
{
    public MyEntitiesConfiguration()
    {
        SetExecutionStrategy("System.Data.SqlClient", () => new SqlAzureExecutionStrategy());
    }
}
Please see more details here: https://msdn.microsoft.com/en-US/data/dn456835
2. If you have explicitly opened the connection, ensure that you close it. You can use a using statement:
using (var db = new MyEntities(connstr.ConnectionString))
{
    // ..do your work
}
https://blogs.msdn.microsoft.com/appfabriccat/2010/12/10/sql-azure-and-entity-framework-connection-fault-handling/
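For reference, EF6 picks up a DbConfiguration class automatically when it lives in the same assembly as the context; if it does not, it can be attached explicitly. A sketch, assuming the usual EDMX-generated partial context class:

using System.Data.Entity;

// The other part of this partial class is the EDMX-generated MyEntities context.
[DbConfigurationType(typeof(MyEntitiesConfiguration))]
public partial class MyEntities
{
}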

AppFabric - putting fine, getting times out?

After painfully trying to get my virtual environment up and running with AppFabric Caching (1.1), I am able to run 2 nodes in 1 cache cluster.
Both show the system as up, which is good. Before, it was not, and that was a pain.
So I am now creating a demo app.
The app is being developed on the host computer, which can connect to the virtual environment (using VMware; they are all in a domain except the host).
I can put things in the cache and I can see the cache statistics which reflects what I have put in the cache.
But when getting, it fails! It just times out, and I have no idea why or where to go:
? u.Email
"36277#bloggs.com"
? CacheManager.Instance.Cache.GetCacheItem(u.Email)
'CacheManager.Instance.Cache.GetCacheItem(u.Email)' threw an exception of type 'Microsoft.ApplicationServer.Caching.DataCacheException'
base {System.Exception}: {"ErrorCode<ERRCA0018>:SubStatus<ES0001>:The request timed out.. Additional Information : The client was trying to communicate with the server : net.tcp://AppFabricTwo.appfabric.demo.com:22233"}
ErrorCode: 18
HelpLink: "http://go.microsoft.com/fwlink/?LinkId=164049"
Message: "ErrorCode<ERRCA0018>:SubStatus<ES0001>:The request timed out.. Additional Information : The client was trying to communicate with the server : net.tcp://AppFabricTwo.appfabric.demo.com:22233"
SubStatus: -1
TrackingId: {00000000-0000-0000-0000-000000000000}
I have AppFabricOne and AppFabricTwo. I can communicate between them with no problems, and I can ping and access both from the HOST computer itself (which is hosting the VMs).
Any ideas why this would be and what to do? Windows firewalls on the VM computers are all disabled, and these are joined to a domain (and using SQL).
My code:
Adding:
Random r = new Random();
int idChosen = r.Next(1, 99999);
User u = new User { LastName = "Bloggs", FirstName = "Joe", CellPhone = "(555) 555-5555", DOB = DateTime.Today.AddYears(-30), UserID = idChosen, Email = idChosen.ToString() + "#bloggs.com" };
DataCacheItemVersion item = CacheManager.Instance.Cache.Put(u.Email, u, this.txtRegion.Text);
Retrieving:
CacheManager.Instance.Cache.GetCacheItem(u.Email)
Yes, I have also tried GetRegionItem, but that still gives me the same error as GetCacheItem.
Are you using the same DataCacheFactory object for getting the cache items as the one you use to put items in the cache? The fact that Put works and Get doesn't makes me think they are somehow different DataCacheFactory objects.
Also, are you able to ping the FQDN AppFabricTwo.appfabric.demo.com from your client machine, and does it resolve to the correct IP address (i.e. the same as AppFabricTwo)? Also check that telnet to port 22233 works from your client (though if Put works, this should work anyway).
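For illustration, a minimal sketch of sharing one factory and one DataCache for both Put and Get; the cache name "default" and the shape of the question's CacheManager wrapper are assumptions:

using Microsoft.ApplicationServer.Caching;

public static class SharedCache
{
    // DataCacheFactory is expensive to create; build it once and reuse it so that
    // Put and Get go through the same client configuration and connections.
    private static readonly DataCacheFactory Factory = new DataCacheFactory();
    public static readonly DataCache Cache = Factory.GetCache("default");
}

Note also that items put into a named region have to be read back with the region-aware overload, e.g. SharedCache.Cache.GetCacheItem(key, regionName).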

Using SMO.Agent to retrieve SQL job execution status - security issue

I've got a C# program that fires off SQL Server Agent jobs using the SQL Server Management Objects (SMO) interfaces. It looks something like:
Server ssis_server = new Server(
    new ServerConnection(SERVER_NAME, SERVER_USERNAME, SERVER_PASSWORD)
);

var agent = ssis_server.JobServer;
var ssis_job = agent.Jobs[job_name];
var current_status = ssis_job.CurrentRunStatus;

if (current_status == JobExecutionStatus.Idle)
{
    ssis_job.Start();
    OnSuccess("Job started: " + job_name);
}
else
{
    OnError("Job is already running or is not ready.");
}
I'm using SQL Server Authentication at this point to simplify things whilst I work this out.
Now, my problem is that unless the SERVER_USERNAME is part of the 'sysadmin' server role, ssis_job.CurrentRunStatus is always 'Idle', even when I know the job is running. It doesn't error out, it just always reports idle.
If the user is an administrator, then the status is returned as expected.
Role membership you say?
Well, I added the SERVER_USERNAME SQL Server login to the msdb role SQLAgentOperatorRole, but that didn't seem to help.
The job's owner is a system administrator account - if that's the issue I'm not sure how to work around it.
Any ideas?
You need to refresh the job by calling the Refresh() method on ssis_job before checking the status; then you will get the correct information.
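Applied to the question's code, that would look roughly like this (same variable names as in the question):

var agent = ssis_server.JobServer;
var ssis_job = agent.Jobs[job_name];

ssis_job.Refresh();                               // re-read the job's state from msdb
var current_status = ssis_job.CurrentRunStatus;   // now reflects a job that is actually running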
