I've got a C# program that fires off SQL Server Agent jobs using the SQL Server Management Objects (SMO) interfaces. It looks something like:
Server ssis_server = new Server(
    new ServerConnection(SERVER_NAME, SERVER_USERNAME, SERVER_PASSWORD)
);

var agent = ssis_server.JobServer;
var ssis_job = agent.Jobs[job_name];
var current_status = ssis_job.CurrentRunStatus;

if (current_status == JobExecutionStatus.Idle)
{
    ssis_job.Start();
    OnSuccess("Job started: " + job_name);
}
else
{
    OnError("Job is already running or is not ready.");
}
I'm using SQL Server Authentication at this point to simplify things whilst I work this out.
Now, my problem is that unless SERVER_USERNAME is a member of the 'sysadmin' fixed server role, ssis_job.CurrentRunStatus is always 'Idle' - even when I know the job is running. It doesn't error out; it just always reports Idle.
If the user is an administrator, then the status is returned as expected.
Role membership you say?
Well, I added the SERVER_USERNAME SQL Server login to the msdb role SQLAgentOperatorRole, but that didn't seem to help.
The job's owner is a system administrator account - if that's the issue I'm not sure how to work around it.
Any ideas?
You need to refresh the job by calling the Refresh() method on ssis_job before checking the status; then you will get the correct information.
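For reference, a minimal sketch of the fix against the code above (SMO caches object state client-side, so the property has to be re-read from the server):

var ssis_job = agent.Jobs[job_name];

// Re-read the job's properties from the server; without this, SMO keeps
// serving the stale state it fetched when the Job object was created.
ssis_job.Refresh();

if (ssis_job.CurrentRunStatus == JobExecutionStatus.Idle)
{
    ssis_job.Start();
    OnSuccess("Job started: " + job_name);
}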
I'm new to Realm Sync (and Realm). I'm trying to convert a REST / SQL Server system to Realm Sync (to avoid having to write my own local-device caching code).
I got a simple configuration working, with a single API-key user and the null partition, read and write permissions just set to true.
But for my more complex application, I want smaller sub-partitions to reduce the amount of data that needs to be cached on local devices, and I want the sub-partitions to be able to be created dynamically by the client. Ideally, I would like to allow an API-key user to connect to any partition whose name starts with their user id (or some other known string, e.g. the profile name). But I can't find a way to get a "starts with" condition into the permissions.
My best attempt was to try setting Read and Write sync permissions to:
{
    "%%partition": {
        "$regex": "^%%user.id"
    }
}
but my client just fails to connect, saying Permission denied (BIND, REFRESH). (Yes, I tried using "$regex": /^%%user.id/ but the Realm UI rejected that syntax.) The Realm Sync log says "user does not have permission to sync on partition (ProtocolErrorCode=206)".
In the Realm Sync log, the partition name was equal to the user id for this test.
Is what I'm trying to do possible? If so, how do I set up the Sync Permissions to make it work?
This can be done using a function. If, like me, you're new to Realm Sync and not fluent in JavaScript, don't worry - it turns out to be not too hard to do after all. (Thanks Jay for encouraging me to try it!)
I followed the instructions on the Define a Function page to create my userCanAccessPartition function like this:
exports = function(partition) {
    return partition.startsWith(context.user.id);
};
Then I set my sync permissions to:
{
    "%%true": {
        "%function": {
            "name": "userCanAccessPartition",
            "arguments": ["%%partition"]
        }
    }
}
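On the client side, any partition name that satisfies the function is then allowed. A hedged sketch with the Realm .NET SDK - the app ID, API key, and the "-profile1" suffix are placeholders, not values from the original post:

using Realms;
using Realms.Sync;

// Sketch: an API-key user opens a dynamically named sub-partition
// whose name starts with their user id, matching userCanAccessPartition.
var app = App.Create("my-realm-app-id");                            // placeholder app ID
var user = await app.LogInAsync(Credentials.ApiKey("my-api-key"));  // placeholder key
var config = new PartitionSyncConfiguration(user.Id + "-profile1", user);
var realm = await Realm.GetInstanceAsync(config);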
I'm creating a service to search for users in LDAP. This should be fairly straightforward and probably done a thousand times, but I cannot seem to break through properly. I thought I had it, but then I deployed this to IIS and it all fell apart.
The following are set up as environment variables:
ldapController
ldapPort
adminUsername 🡒 definitely a different user from the one the error reports
adminPassword
baseDn
And read in through my Startup.Configure method.
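(For illustration, reading them might look like the following sketch; the field names are assumptions based on the list above.)

// Sketch: pull the LDAP settings from environment variables in Startup.Configure.
ldapController = Environment.GetEnvironmentVariable("ldapController");
ldapPort       = int.Parse(Environment.GetEnvironmentVariable("ldapPort"));
adminUsername  = Environment.GetEnvironmentVariable("adminUsername");
adminPassword  = Environment.GetEnvironmentVariable("adminPassword");
baseDn         = Environment.GetEnvironmentVariable("baseDn");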
EDIT I know they are available to IIS, because I returned them in a REST endpoint.
This is my code:
// Connect to LDAP
LdapConnection conn = new LdapConnection();
conn.Connect(ldapController, ldapPort);
conn.Bind(adminUsername, adminPassword);

// Run search
LdapSearchResults lsc = conn.Search(
    baseDn,
    LdapConnection.SCOPE_SUB,
    lFilter,
    new string[] { /* lots of attributes to fetch */ },
    false
);

// List out entries
var entries = new List<UserDto>();
while (lsc.hasMore() && entries.Count < 10)
{
    LdapEntry ent = lsc.next(); // <--- THIS FAILS!
    // ...
}
return entries;
As I said, when debugging this in Visual Studio, it all works fine. When deployed to IIS, the error is:
Login failed for user 'DOMAIN\IIS_SERVER$'
Why? The user specified in adminUsername should be the one used to log in (through conn.Bind(adminUsername, adminPassword);), right? So why does it explode stating that the IIS user is the one doing the login?
EDIT I'm using Novell.Directory.Ldap.NETStandard
EDIT The 'user' specified in the error above is actually NOT a user at all. It is the AD-registered name of the computer running IIS... if that makes any difference at all.
UPDATE After consulting with colleagues, I set up a new application pool on IIS, and tried to run the application as a specified user instead of the default passthrough. Exactly the same error message regardless of which user I set.
Try going via NetworkCredential, which allows you to specify the domain:
var networkCredential = new NetworkCredential(userName, password, domain);
conn.Bind(networkCredential);
If that does not work, specify auth type Basic (I'm not sure what the default is) before the call to Bind.
conn.AuthType = AuthType.Basic;
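Note that Bind(NetworkCredential) and the AuthType property are members of System.DirectoryServices.Protocols.LdapConnection rather than of the Novell library the question uses, so switching namespaces may be needed. A hedged sketch, assuming the same environment variables plus a domain value:

using System.DirectoryServices.Protocols;
using System.Net;

// Sketch: bind with explicit credentials via System.DirectoryServices.Protocols.
var identifier = new LdapDirectoryIdentifier(ldapController, ldapPort);
using (var conn = new LdapConnection(identifier))
{
    conn.AuthType = AuthType.Basic;           // force a simple bind instead of Negotiate
    conn.SessionOptions.ProtocolVersion = 3;  // LDAP v3
    conn.Bind(new NetworkCredential(adminUsername, adminPassword, domain));
    // ... build and send a SearchRequest here
}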
I'm on a project using .NET 4.5, MVC, EF 6
I had naively implemented a caching system using the HttpRuntime cache and needed to invalidate the cached data whenever it was updated; except I forgot to take into account that our production server is published to a load-balanced set of two servers... :|
So on production, after the data was updated, the app would sometimes serve the right data, and sometimes the old data depending on which server the request was hitting. Bad news bears.
So I decided to define a dependency on the SQL table AcademicTerms which is where my data is coming from. But I did something wrong, and I'm not sure what.
SQL that I ran to set up the permissions after enabling the Service Broker
EXEC sp_addrole 'sql_dependency_role'
GRANT CREATE PROCEDURE to sql_dependency_role
GRANT CREATE QUEUE to sql_dependency_role
GRANT CREATE SERVICE to sql_dependency_role
GRANT REFERENCES on
CONTRACT::[http://schemas.microsoft.com/SQL/Notifications/PostQueryNotification]
to sql_dependency_role
GRANT VIEW DEFINITION TO sql_dependency_role
GRANT SELECT to sql_dependency_role
GRANT SUBSCRIBE QUERY NOTIFICATIONS TO sql_dependency_role
GRANT RECEIVE ON QueryNotificationErrorsQueue TO sql_dependency_role
EXEC sp_addrolemember 'sql_dependency_role', 'MY_ASPNET_APP_USERNAME'
My implementation of inserting new data after fetching and thus setting up the SqlDependency (hopefully less naive!):
private void insertIntoCache(
    AcademicTermLockingInfo newItem,
    string itemKey,
    Guid termID)
{
    var dbContextConnection = db.Database.Connection;
    var connectionString = dbContextConnection.ConnectionString;

    // important step, otherwise it won't work
    SqlDependency.Start(connectionString);

    CacheItemPolicy policy = new CacheItemPolicy
    {
        AbsoluteExpiration = DateTime.UtcNow.AddMonths(6)
    };
    CacheItem item = new CacheItem(itemKey, newItem);

    using (SqlConnection connection = new SqlConnection(connectionString))
    {
        connection.Open();

        // command which will be used to notify updates - parameterized, and
        // using the two-part table name that query notifications require
        using (SqlCommand command = new SqlCommand(
            "SELECT Name, LockDate FROM dbo.AcademicTerms WHERE ID = @termID",
            connection))
        {
            command.Parameters.AddWithValue("@termID", termID);

            SqlDependency dependency = new SqlDependency(command);
            SqlChangeMonitor monitor = new SqlChangeMonitor(dependency);
            policy.ChangeMonitors.Add(monitor);
            MemoryCache.Default.Set(item, policy);

            // execute once, otherwise the dependency is not registered
            command.ExecuteNonQuery();
        }
    }
}
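For completeness, a hedged sketch of the read side under the same assumptions - the DTO's Name/LockDate properties and the db.AcademicTerms set are inferred from the code above, not taken from the original post:

// Sketch: cache-aside read. When the dependency fires, the SqlChangeMonitor
// evicts the item, so a miss here means we re-fetch and re-register.
var cached = MemoryCache.Default.Get(itemKey) as AcademicTermLockingInfo;
if (cached == null)
{
    var term = db.AcademicTerms.Single(t => t.ID == termID);
    cached = new AcademicTermLockingInfo { Name = term.Name, LockDate = term.LockDate };
    insertIntoCache(cached, itemKey, termID);
}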
Any help would be very much appreciated!
Things I've done:
Created two new users in SQL Server, net and sanba
Added every NT* login and the sa login to the net user, added net to the sql_dependency_role
Ran grant alter on schema::sql_dependency_role to net and grant alter on schema::dbo to net
Checked that my local SQL Server's Broker Enabled option is True under Service Broker
Tried the web cache and the Memory Cache interchangeably (probably wouldn't change anything)
Tried making the SQL command string use a fully qualified name, both DevUMS.dbo.AcademicTerms and dbo.AcademicTerms
I queried the sys.dm_qn_subscriptions and saw I had one subscription, good!
I queried DevUMS.sys.transmission_queue and found an exception!
An exception occurred while enqueueing a message in the target
queue. Error: 15517, State: 1. Cannot execute as the database
principal because the principal "dbo" does not exist, this type of
principal cannot be impersonated, or you do not have permission.
I found this SO post with the same error
The secret sauce I was missing was alter authorization on database::DevUMS to [sa];, which I found in the linked SO post's answer.
There are a number of other steps, like adding a role to use the appropriate login, but honestly I'm really unsure whether those are actually necessary.
I'm going to publish a little later on today, and then I'll try to do the minimal number of steps and document that here. I found the documentation in the wild to be very scattered and poor, so I hope this answer can be a definitive place to refer to in the future.
In ASP.NET, we have an application that prompts users for their AD credentials using Basic Authentication and ASP.NET Impersonation.
The application then connects to SQL Server with the following connection string
SqlClient.SqlConnection cnn = new SqlClient.SqlConnection();
cnn.ConnectionString =
    "Server=" + MenuServer + ";" +
    "Database=" + MenuDatabase + ";" +
    "Trusted_Connection=Yes;" +
    "Pooling=False;";
cnn.Open();
99% of the time, this passes along the user context just fine, but sporadically we get the following error from SQL Server:
Login failed for user 'WebServerName$'. Reason: Could not find a login matching the name provided
Meaning that it has been unable to pass along the user credentials and instead defaulted to the IIS worker process. Interestingly, when we catch the error, the current user will still be accurately recorded with the following code:
string userName = Environment.UserName;
Questions:
What user context does Integrated Security pass along to the database?
Is it possible to programmatically check the user before calling cnn.Open() to confirm we have a real user?
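(On the second question, a hedged sketch of one possible check: Windows machine account names end with '$', so the impersonated identity can be inspected before opening the connection.)

// Sketch: refuse to open the connection when impersonation has fallen back
// to the machine account (e.g. WebServerName$).
var identity = System.Security.Principal.WindowsIdentity.GetCurrent();
if (identity.Name.EndsWith("$"))
{
    // A trailing '$' indicates a machine account rather than a real user.
    throw new InvalidOperationException("Lost impersonation; running as " + identity.Name);
}
cnn.Open();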
This can happen if you are using ORM mappers, e.g. Entity Framework or NHibernate (I have experienced this with NHibernate). It is because the mapping framework uses lazy evaluation: it doesn't get all the data from the database until you ask for it. For example, if you are getting a list of objects, each of which contains a further list of objects (e.g. Customer contains a list of Invoice), it won't fetch the Invoice objects until you actually use them.
If you don't use them until binding them to an aspx control, and that binding happens in the aspx page itself (not in, e.g., the Page_Load event), then the page life-cycle means the sub-list is evaluated after your code has run and any impersonation you are doing has ended - so the query runs under the IIS account.
You can prevent this happening by either preventing lazy evaluation (generally not a good idea), or by making sure that you touch each of the lists you'll need the page to use in your code, under your impersonated account.
foreach (Customer c in customers)
{
    int i = c.Invoices.Count; // make sure that they do get retrieved
}
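With Entity Framework 6 (which this project uses) the same "touch it now" effect can be had by eager loading instead; a hedged sketch, with the Customer/Invoices model kept purely for illustration:

using System.Data.Entity; // brings in the lambda Include() extension

// Sketch: materialize the child collections while the impersonated
// context is still active, instead of during page rendering.
var customers = db.Customers
    .Include(c => c.Invoices)
    .ToList();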
After painfully trying to get my virtual environment up and running with AppFabric Caching (1.1), I am now able to run two nodes in one cache cluster.
Both nodes report as up, which is good. Before, they did not, and that was a pain.
So I am now creating a demo app.
The app is being developed on the host computer which can connect to the virtual environment (using VMware and they are all in a domain except the host).
I can put things in the cache and I can see the cache statistics which reflects what I have put in the cache.
But getting fails! It just times out, and I have no idea why or where to go:
? u.Email
"36277#bloggs.com"
? CacheManager.Instance.Cache.GetCacheItem(u.Email)
'CacheManager.Instance.Cache.GetCacheItem(u.Email)' threw an exception of type 'Microsoft.ApplicationServer.Caching.DataCacheException'
base {System.Exception}: {"ErrorCode<ERRCA0018>:SubStatus<ES0001>:The request timed out.. Additional Information : The client was trying to communicate with the server : net.tcp://AppFabricTwo.appfabric.demo.com:22233"}
ErrorCode: 18
HelpLink: "http://go.microsoft.com/fwlink/?LinkId=164049"
Message: "ErrorCode<ERRCA0018>:SubStatus<ES0001>:The request timed out.. Additional Information : The client was trying to communicate with the server : net.tcp://AppFabricTwo.appfabric.demo.com:22233"
SubStatus: -1
TrackingId: {00000000-0000-0000-0000-000000000000}
I have AppFabricOne and AppFabricTwo. I can communicate between them with no problems, and I can ping and access both from the HOST computer itself (which is hosting the VMs).
Any ideas why this would be and what to do? Windows firewalls on the VM computers are all disabled, and the machines are joined to a domain (and using SQL).
My code:
Adding:
Random r = new Random();
int idChosen = r.Next(1, 99999);
User u = new User
{
    LastName = "Bloggs",
    FirstName = "Joe",
    CellPhone = "(555) 555-5555",
    DOB = DateTime.Today.AddYears(-30),
    UserID = idChosen,
    Email = idChosen.ToString() + "#bloggs.com"
};
DataCacheItemVersion item = CacheManager.Instance.Cache.Put(u.Email, u, this.txtRegion.Text);
Retrieving:
CacheManager.Instance.Cache.GetCacheItem(u.Email)
Yes, I have also tried GetRegionItem, but that still gives me the same error as GetCacheItem.
Are you using the same DataCacheFactory object for getting the cache items as the one you use to put items in the cache? The fact that Put works and Get doesn't makes me think that they are somehow different DataCacheFactory objects.
Also, are you able to ping the FQDN AppFabricTwo.appfabric.demo.com from your client machine, and does it resolve to the correct IP address (i.e. the same as AppFabricTwo)? Also check that telnet to port 22233 works from your client (though if Put works, this should work anyway).
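If they are indeed different factories, a hedged sketch of sharing one (DataCacheFactory is expensive to create and is meant to be constructed once and reused; GetDefaultCache() assumes the client configuration names a default cache):

using Microsoft.ApplicationServer.Caching;

// Sketch: one factory and one DataCache per app domain, used by both sides.
public static class SharedCache
{
    private static readonly DataCacheFactory Factory = new DataCacheFactory();
    public static readonly DataCache Cache = Factory.GetDefaultCache();
}

// Put and Get then go through the same client:
// SharedCache.Cache.Put(u.Email, u, regionName);
// DataCacheItem item = SharedCache.Cache.GetCacheItem(u.Email, regionName);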