LDAP - What to use instead of DirSyncRequestControl - C#

I'm migrating some code away from Active Directory, rewriting all directory requests to reference classes in System.DirectoryServices.Protocols and be LDAP v3 compliant. This is supposed to be a low-level LDAP v3 namespace, so I assumed it wouldn't be polluted with AD-specific types. The following code is from a monitoring background worker that was already using the System.DirectoryServices.Protocols namespace. It opens an async, long-running request to AD and listens for changes using the DirSyncRequestControl control.
SearchRequest request = new SearchRequest(
    mDNSearchRoot,
    mLdapFilter,
    SearchScope.Subtree,
    mAttrsToWatch
);
request.Controls.Add(
    new DirSyncRequestControl(
        mCookie,
        mDirSyncOptions
    )
);
mConn.BeginSendRequest(
    request,
    mRequestTimeout,
    PartialResultProcessing.NoPartialResultSupport,
    endPollDirectory,
    null
);
It sends a cookie as a byte[] that tells the directory when to start querying from, which is handy in case the background worker crashes and needs a restart later. In the endPollDirectory callback an updated cookie is received and persisted immediately to the filesystem, so if a restart is ever needed we always know when we last received results. That cookie is loaded on restart and passed back with the DirSyncRequestControl.
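For illustration, a sketch of what that callback's cookie handling could look like; mCookiePath and the entry processing are assumptions, not part of the original code:
private void endPollDirectory(IAsyncResult ar)
{
    SearchResponse response = (SearchResponse)mConn.EndSendRequest(ar);

    foreach (DirectoryControl control in response.Controls)
    {
        if (control is DirSyncResponseControl dirSync)
        {
            // Persist the updated cookie first, so a crash after this point
            // still restarts from the right place.
            System.IO.File.WriteAllBytes(mCookiePath, dirSync.Cookie);
            mCookie = dirSync.Cookie;
        }
    }

    // ... process response.Entries, then issue the next BeginSendRequest ...
}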
The issue I'm facing is that DirSyncRequestControl operates against an OID that is specifically an Active Directory extension, not standard LDAP. Our corporate directory is IBM-based LDAP and can't have AD OIDs and controls applied to it. Standard LDAP supports "Persistent Search" (2.16.840.1.113730.3.4.3), but .NET doesn't provide a control that could be added as in the above code, and there's no way to pass arguments like a cookie. The idea with the Persistent Search control is that you open the connection and, as time passes, the LDAP server sends changes back that I can respond to. But on initiating the connection there's no way to specify when to return results from; only results since the request was started will be received. If the monitor were to die and a directory change happened before the monitor could restart, those changes could never be handled.
Does anyone know of an existing control, compliant with standard LDAP, that could be added to the request and that operates the way the AD-specific DirSyncRequestControl does, where a start date/time could be passed?

Standard would be the 1.3.6.1.4.1.4203.1.9.1.1 "Sync Request" control from RFC 4533, which is the basis of "Syncrepl" directory replication in OpenLDAP and 389-ds.
(Though "standard" does not guarantee that IBM's LDAP server will support it – or that it's enabled on your server specifically, similar to how OpenLDAP requires loading the "syncprov" overlay first.)
2.2. Sync Request Control
The Sync Request Control is an LDAP Control [RFC4511] where the
controlType is the object identifier 1.3.6.1.4.1.4203.1.9.1.1 and the
controlValue, an OCTET STRING, contains a BER-encoded
syncRequestValue. The criticality field is either TRUE or FALSE.
   syncRequestValue ::= SEQUENCE {
       mode ENUMERATED {
           -- 0 unused
           refreshOnly       (1),
           -- 2 reserved
           refreshAndPersist (3)
       },
       cookie     syncCookie OPTIONAL,
       reloadHint BOOLEAN DEFAULT FALSE
   }
The Sync Request Control is only applicable to the SearchRequest
Message.
Although .NET doesn't support this control natively (it seems to focus on supporting just the Active Directory extensions), it should be possible to create a custom class similar to the DirSyncRequestControl class with the correct OID and correct BER serialization (and somehow handle the "Sync Done" control that delivers the final sync cookie to you, etc.).
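A minimal sketch of such a class, assuming BerConverter's 'e' (ENUMERATED), 'o' (OCTET STRING) and 'b' (BOOLEAN) format specifiers; the class name and mode constants are illustrative, while the OID and value layout come from the RFC excerpt above:
using System.DirectoryServices.Protocols;

public class SyncRequestControl : DirectoryControl
{
    public const int RefreshOnly = 1;
    public const int RefreshAndPersist = 3;

    private readonly int _mode;
    private readonly byte[] _cookie;   // previously persisted sync cookie; may be null
    private readonly bool _reloadHint;

    public SyncRequestControl(int mode, byte[] cookie, bool reloadHint)
        : base("1.3.6.1.4.1.4203.1.9.1.1", null, true, true)
    {
        _mode = mode;
        _cookie = cookie;
        _reloadHint = reloadHint;
    }

    public override byte[] GetValue()
    {
        // BER-encode the syncRequestValue SEQUENCE, omitting the OPTIONAL cookie
        // and the DEFAULT FALSE reloadHint when they carry no information.
        if (_cookie != null && _reloadHint)
            return BerConverter.Encode("{eob}", _mode, _cookie, true);
        if (_cookie != null)
            return BerConverter.Encode("{eo}", _mode, _cookie);
        if (_reloadHint)
            return BerConverter.Encode("{eb}", _mode, true);
        return BerConverter.Encode("{e}", _mode);
    }
}

// Usage: request.Controls.Add(new SyncRequestControl(SyncRequestControl.RefreshAndPersist, mCookie, false));
On the response side, the final cookie arrives in the "Sync Done" control (OID 1.3.6.1.4.1.4203.1.9.1.3 in the same RFC), which would need a matching BER decode before the cookie can be persisted.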
OpenLDAP's ldapsearch supports calling this control via ldapsearch -E sync=rp[/cookie]. On the server side, slapd supports this control for databases that have the "syncprov" overlay loaded (which is required for replication).
389-ds (Red Hat Directory Server) supports this control if the Content Synchronization plug-in is enabled.
The other approach is to run a persistent search for (modifyTimestamp>=...) and keep track of the last received entry's change timestamp in place of the "cookie". This isn't very accurate, unfortunately.
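A sketch of that fallback, assuming the draft Persistent Search control's value layout (a SEQUENCE of changeTypes, changesOnly, returnECs) and a saved GeneralizedTime stamp in place of the cookie; names are illustrative:
using System.DirectoryServices.Protocols;

public class PersistentSearchControl : DirectoryControl
{
    public PersistentSearchControl()
        : base("2.16.840.1.113730.3.4.3", null, true, true) { }

    public override byte[] GetValue()
    {
        // persistentSearch ::= SEQUENCE { changeTypes INTEGER, changesOnly BOOLEAN, returnECs BOOLEAN }
        // 15 = add(1) | delete(2) | modify(4) | modDN(8)
        return BerConverter.Encode("{ibb}", 15, true, true);
    }
}

// lastSeenStamp is the timestamp persisted after the previous run, e.g. "20240101120000Z".
string filter = "(&(objectClass=*)(modifyTimestamp>=" + lastSeenStamp + "))";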

Related

Getting Error on Kendo UI Folder Upload - ERR_HTTP2_PROTOCOL_ERROR

I am using the Telerik Kendo File Upload for uploading folders.
In the production environment, a few users are complaining about an issue with folder upload: during upload a few files get errored out, and the developer tools' console tab logs an "ERR_HTTP2_PROTOCOL_ERROR" error, as attached, for the failed files.
When I try it I don't get this error and all folders upload properly. I asked a user to share the files for which they were facing the error, and when I tried, they uploaded successfully. When the user retried uploading the same files that had errored out yesterday, they succeeded today, but there are still files giving the same error.
I went through a post where it says the problem could be due to the use of HTTP/2, and that when they switched to HTTP/1.1 it worked fine. We are also using HTTP/2, but we don't have the option of going back to HTTP/1.1. Link below:
https://www.telerik.com/forums/problems-with-multi-file-upload-and-http-2
Any suggestions ?
This is because HTTP/2 is not enabled on your clients' machines, hence the error.
If you look at your local machine you will see that, under your server, you have the HTTPS protocol enabled and a valid certificate.
Your clients either lack a valid certificate on the server or are using the site over the HTTP protocol.
You can learn more here:
Http/2 explanation
SETTINGS_MAX_CONCURRENT_STREAMS (0x3):
Indicates the maximum number of concurrent streams that the sender will allow. This limit is directional: it applies to the number of streams that the sender permits the receiver to create. Initially, there is no limit to this value. It is recommended that this value be no smaller than 100, so as to not unnecessarily limit parallelism.
A value of 0 for SETTINGS_MAX_CONCURRENT_STREAMS SHOULD NOT be treated as special by endpoints. A zero value does prevent the creation of new streams; however, this can also happen for any limit that is exhausted with active streams. Servers SHOULD only set a zero value for short durations; if a server does not wish to accept requests, closing the connection is more appropriate.
Resolution: add an "Http2MaxConcurrentClientStreams" value under HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\HTTP\Parameters in the registry, set it to 100 or greater, and restart the server.
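If you want to script that change, a minimal C# sketch (an assumption on my part: it writes the value as a DWORD and must be run elevated):
using Microsoft.Win32;

class Program
{
    static void Main()
    {
        // Creates or updates the value described above; HTTP.sys reads it again after a restart.
        using (RegistryKey key = Registry.LocalMachine.CreateSubKey(
            @"SYSTEM\CurrentControlSet\Services\HTTP\Parameters"))
        {
            key.SetValue("Http2MaxConcurrentClientStreams", 100, RegistryValueKind.DWord);
        }
    }
}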

In .NET, how do you listen to events from a specific ETW provider, given just the provider's GUID?

I have the GUID of a provider; in .NET, how do I listen to all events generated by that provider?
I have two external tools that allow me to listen. One is tracelog from the SDK; I run
tracelog -start <providerName> -b 8192 -seq 2000 -f somefile.bin -guid #00000000,1111,2222,3333,444444555555
This tool simply enables the provider and returns. Then I run another custom/private tool (written in C++) that collects the events and logs them to a file in a pretty formatted table.
I am trying to port all of this to my .NET application, but I am not able to listen to the events from this given provider.
I tried looking at the MSDN documentation for EventProviderTraceListener; there is a code example in there, but I don't understand how that's supposed to enable me to listen to an arbitrary provider.
I also saw this post about TraceEventSession, which I used to try the following code:
TraceEventSession traceEventSession = new TraceEventSession("myname");
traceEventSession.EnableProvider(new Guid("00000000-1111-2222-3333-444444555555"), TraceEventLevel.Always);
traceEventSession.Source.Dynamic.AddCallbackForProviderEvents(null, @event =>
{
    System.Windows.Forms.MessageBox.Show("Event " + @event.EventName);
});
traceEventSession.Source.Process();
The callback is invoked, but only twice, for a different provider GUID and for event names that my provider is not supposed to send (the event names are EventTrace/BuildInfo and EventTrace/DbgIdRsds).
I also looked at this documentation, which shows how to listen in real time to events from the kernel provider, and it looks like my code does the same things (although for a non-standard provider).
Why am I getting those two events, and how can I:
1. enable event listening for the provider whose GUID I have, and
2. capture all the events from that provider only? (The provider may be issuing events from user mode and/or kernel mode; I don't know if this makes a difference.)
When I run the separate tools the collection works fine and all the events are captured without issues (they are generated quite frequently, every second, so it's not a matter of me not waiting long enough).

When listing a Drive folder's changes (via ChangeResource) for the first time, what page token should be used?

Let's say the user already has files synchronized (via my app) to their Drive folder. Now they sign into my app on a second device and are ready to sync files for the first time. Do I use the Changes API for the initial sync process?
I ask because using the Changes API requires a StartPageToken, which requires that there has been a previous sync operation. There is no possible way for the user to already have a StartPageToken if they are synchronizing data on a device for the first time.
Google's documentation is a joke. They shouldn't leave it up to us to read between the lines and just figure this out. I'm sure I can cook up something that will "work", but how do I ever know that it is the appropriate and efficient way to handle this?
public async Task<AccessResult> GetChangesAsync(CancellationToken cancellationToken, string fields = "*")
{
    ChangesResource.ListRequest listRequest = new ChangesResource.ListRequest(DriveService, startPageToken)
    {
        Spaces = Folder_appDataFolder,
        Fields = fields + ", nextPageToken",
        IncludeRemoved = true,
        PageSize = 20
    };
    ChangeList changeList = await listRequest.ExecuteAsync(cancellationToken);
    // ... build and return an AccessResult from changeList (omitted in the question) ...
}
Here, I am looking to sync the user's data for the first time, so a page token doesn't even make sense, because during the first sync the goal is to get all of the user's data. From then on you are looking to sync only further changes.
One approach I thought of is to simply use ListRequest to list all of the user's data and start downloading the files that way. I can then request a start page token and store it, to be used during sync attempts that occur later...
...But what if during the initial download of the user's files (800 files, for example) an error occurs and the ListRequest fails on file 423? Because I cannot obtain a StartPageToken in the middle of a ListRequest to store in case of emergency, do I have to start all over and download all 800 files again, instead of starting at file 423?
When doing changes.list for the first time you should call getStartPageToken; this will return the page token you can use to get the change list. If it's the first time, then there will be no changes, of course.
If the user is using your application from more than one device, then the logical course of action would be to save the page token in a central location when the user starts the application for the first time on the first device. This will enable you to use that same token on all additional devices that the user may choose to use.
This could be on your own server, or even in the user's app data folder on Drive.
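A sketch of that first step with the .NET Drive v3 client (the DriveService instance and where you persist the token are up to you):
// Fetch the token that marks "now"; changes.list calls made later with this token
// return only changes that happened after this point.
StartPageToken response = await DriveService.Changes
    .GetStartPageToken()
    .ExecuteAsync(cancellationToken);

string pageToken = response.StartPageTokenValue;
// Persist pageToken centrally (your server, or the appDataFolder) before starting
// the initial file listing, so a crash mid-download doesn't lose your place.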
I am not exactly sure what your application is doing, but I really don't think you should be downloading the user's files unless they try to access them. There is no logical reason I can think of for your application to store a mirror image of a user's Drive account. Access the data they need when they need it; you shouldn't need everything. Again, I don't know exactly what your application does.

Converting Microsoft EWS StreamingNotification Example to a service

I've been working to convert Microsoft's EWS Streaming Notification Example to a service
(MS source: http://www.microsoft.com/en-us/download/details.aspx?id=27154).
I tested it as a console app. I then used a generic service template and got it to the point where it would compile, install, and start. It stops after about 10 seconds with the ubiquitous "the service on local computer started and then stopped."
So I went back in, upgraded to C# 2013 Express, and used NLog to put a bunch of log trace commands in so I could see where it was when it exited.
The last place I can find it is in the example code's SynchronizeChanges function:
public static void SynchronizeChanges(FolderId folderId)
{
    logger.Trace("Entering SynchronizeChanges");
    bool moreChangesAvailable;
    do
    {
        logger.Trace("Synchronizing changes...");
        //Console.WriteLine("Synchronizing changes...");
        // Get all changes since the last call. The synchronization cookie is stored in the
        // _SynchronizationState field.
        // Only the ids are requested. Additional properties should be fetched via GetItem calls.
        logger.Trace("Getting changes into var changes.");
        var changes = _ExchangeService.SyncFolderItems(folderId, PropertySet.IdOnly, null, 512,
            SyncFolderItemsScope.NormalItems,
            _SynchronizationState);
        // Update the synchronization cookie
        logger.Trace("Updating _SynchronizationState");
The log file shows the trace message "Getting changes into var changes." but not the "Updating _SynchronizationState" message,
so it never gets past var changes = _ExchangeService.SyncFolderItems(...).
I cannot for the life of me figure out why it's just exiting. There are many examples of EWS streaming notifications; I have three that compile and run just fine, but as far as I can tell nobody has posted an example of it done as a service.
If you don't see the "Updating..." message, it's likely the sync threw an exception. Wrap it in a try/catch.
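For example, a minimal sketch using the NLog logger already present in the code above:
try
{
    var changes = _ExchangeService.SyncFolderItems(folderId, PropertySet.IdOnly, null, 512,
        SyncFolderItemsScope.NormalItems, _SynchronizationState);
}
catch (Exception ex)
{
    // Log the real failure instead of letting the service die silently.
    logger.Error("SyncFolderItems failed: " + ex);
    throw;
}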
OK, so now that I see the error, this looks like your garden-variety permissions problem. When you ran this as a console app, you likely presented your default credentials to Exchange, which were for your login ID. For a Windows service running under one of the built-in accounts (e.g. Local System), your default credentials will not have access to Exchange.
To rectify, either (1) run the service under the account you ran the console app with, or (2) add those credentials to the ExchangeService object.
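Option (2) would look something like this, assuming the EWS Managed API's WebCredentials type (the account values are placeholders):
// Present explicit credentials instead of the built-in service account's defaults.
_ExchangeService.Credentials = new WebCredentials("serviceUser", "password", "DOMAIN");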

Connecting to a network drive programmatically and caching credentials

I'm finally set up to work from home via VPN (using Shrew as a client), and I have only one annoyance. We use some batch files to upload config files to a network drive. This works fine from work, and from my team lead's laptop, but both of those machines are on the domain. My home system is not, and won't be, so when I run the batch file I get a ton of "invalid drive" errors because I'm not a domain user.
The solution I've found so far is to make a batch file with the following:
explorer \\MACHINE1
explorer \\MACHINE2
explorer \\MACHINE3
Then I manually log in to each machine using my domain credentials as the prompts pop up. Unfortunately, there are around 10 machines I may need to use, and it's a pain to keep entering the password if I missed one that a batch file requires.
I'm looking into using the answer to this question to make a little C# app that'll take the login info once and log in programmatically. Will the authentication be shared automatically with Explorer, or is there anything special I need to do? If it does work, how long are the credentials cached?
Is there an app that does something like this automatically?
Unfortunately, domain authentication via the VPN isn't an option, according to our admin.
EDIT: If there's a way to pass login info to Explorer via the command line, that would be even easier to script with Ruby and highline.
EDIT: In case anyone else has the same problem, here's the solution I wound up using. It requires Ruby and the Highline gem.
require "highline/import"
domain = ask("Domain: ")
username = ask("Username: ")
password = ask("Password: ") { |q| q.echo = false }
machines = [
'\\MACHINE1\SHARE',
'\\MACHINE2\SHARE',
'\\MACHINE3\SHARE',
'\\MACHINE4\SHARE',
'\\MACHINE5\SHARE'
]
drives = ('f'..'z').to_a[-machines.length..-1]
drives.each{|d| system("net use #{d}: /delete >nul 2>nul"); }
machines.zip(drives).each{|machine, drive| system("net use #{drive}: #{machine} #{password} /user:#{domain}\\#{username} >nul 2>nul")}
It'll figure out how many mapped drives I need, then start mapping them to the requested shares. In this case, it maps them from V: to Z:, and assumes I don't have anything shared with those drive letters.
If you already have an Explorer window open to one of the shares, it may give an error, so before I ran the Ruby script, I ran:
net use * /delete
That cleared up the "multiple connections to a share not permitted" error, and allowed me to connect with no problems.
You could create a batch file that uses "NET USE" to connect to your shares. You'd need to use a drive letter for each share, but it'd be super simple to implement.
Your batch file would look like this:
net use h: \\MACHINE1 <password> /user:<domain>\<user>
net use i: \\MACHINE2 <password> /user:<domain>\<user>
net use j: \\MACHINE3 <password> /user:<domain>\<user>
UPDATE
Whether the connection remains or not depends upon what you specified for the /persistent switch. If you specified yes, then it will attempt to reconnect upon your next logon. If you specified no, then it won't. The worrying thing is that the documentation says it defaults to the value you used last!
If you specified no, the connection will remain until you next reboot. If you drop your VPN connection the drive would be unavailable (but if you reconnect to the VPN the drive should be available again, as long as you haven't removed it).
I don't know of a way to use it without mapping to a drive letter; the documentation would lead you to believe that it isn't possible.
I understand your problem: you're just trying to give Explorer the correct credentials so it stops nagging you with login boxes. Using mapped drives, though not perfect, will at least alleviate your pain.
To pass credentials to Explorer by command line, you should take a look at the net use command.
Use the WNetAddConnection2() API via P/Invoke.
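A rough P/Invoke sketch of that approach; the share path and credentials are placeholders. Passing null for the local name makes a deviceless connection, which should cache the credentials without mapping a drive letter:
using System;
using System.Runtime.InteropServices;

static class NetworkShare
{
    [StructLayout(LayoutKind.Sequential, CharSet = CharSet.Unicode)]
    struct NETRESOURCE
    {
        public int dwScope;
        public int dwType;          // 1 = RESOURCETYPE_DISK
        public int dwDisplayType;
        public int dwUsage;
        public string lpLocalName;  // e.g. "Z:", or null for a deviceless connection
        public string lpRemoteName; // e.g. @"\\MACHINE1\SHARE"
        public string lpComment;
        public string lpProvider;
    }

    [DllImport("mpr.dll", CharSet = CharSet.Unicode)]
    static extern int WNetAddConnection2(ref NETRESOURCE lpNetResource,
        string lpPassword, string lpUserName, int dwFlags);

    // Returns 0 (NO_ERROR) on success, otherwise a Win32 error code.
    public static int Connect(string remoteName, string userName, string password)
    {
        var resource = new NETRESOURCE { dwType = 1, lpRemoteName = remoteName };
        return WNetAddConnection2(ref resource, password, userName, 0);
    }
}

// Usage (placeholders): NetworkShare.Connect(@"\\MACHINE1\SHARE", @"DOMAIN\user", "password");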
