I'm getting this error:
Unhandled Exception: System.Runtime.InteropServices.COMException (0x80042001): Exception from HRESULT: 0x80042001
at System.Runtime.InteropServices.Marshal.ThrowExceptionForHRInternal(Int32 errorCode, IntPtr errorInfo)
at System.Management.ManagementEventWatcher.Start()
at MyNamespace.Program.Main(String[] args) in {somedir}\Program.cs:line 16
And here's my C# console app that I'm using to watch the registry:
using System;
using System.Management;

namespace MyNamespace
{
    class Program
    {
        static void Main(string[] args)
        {
            var watcher = new ManagementEventWatcher(new WqlEventQuery("SELECT * FROM RegistryTreeChangeEvent"));
            var handler = new MyHandler();
            watcher.EventArrived += handler.Arrived;

            //Start watching for events
            watcher.Start();

            while (handler.EventHasntFiredYet)
            {
                // Nothing.
            }

            //Stop watching
            watcher.Stop();
        }

        public class MyHandler
        {
            public bool EventHasntFiredYet;

            public MyHandler()
            {
                EventHasntFiredYet = true;
            }

            public void Arrived(object sender, EventArrivedEventArgs e)
            {
                var propertyDataCollection = e.NewEvent.Properties;
                foreach (var p in propertyDataCollection)
                {
                    Console.WriteLine("{0} -- {1}", p.Name, p.Value);
                }
                EventHasntFiredYet = false;
            }
        }
    }
}
I'm trying to simply watch the registry for changes. Does anyone have any suggestions as to why this is failing?
It is an internal WMI error, WBEMESS_E_REGISTRATION_TOO_BROAD, "The provider registration overlaps with the system event domain."
That's about as good an error message as you'd ever get out of COM. Striking how much better the .NET exception messages are. Anyhoo, I'm fairly sure that what it means is "you are asking for WAY too many events". You'll have to be more selective in your query by using a WHERE clause. Like:
SELECT * FROM RegistryTreeChangeEvent
WHERE Hive='HKEY_LOCAL_MACHINE' AND
      RootPath='SOFTWARE\\Microsoft'
Prompted by Giorgi, I found the MSDN page that documents the problem:
The following is an example of an incorrect registration.
SELECT * FROM RegistryTreeChangeEvent
WHERE Hive = "hkey_local_machine" OR
      RootPath = "software"
Because there is no way to evaluate the possible values for each of the properties, WMI rejects, with the error WBEM_E_TOO_BROAD, any query that either does not have a WHERE clause or whose WHERE clause is too broad to be of any use.
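For illustration, here is a sketch of how a narrowed query could be plugged into the original watcher; the hive and key path are just examples, and note that backslashes inside WQL strings must be doubled:
// Sketch only: the same watcher, but with a scoped query so WMI accepts the registration.
var query = new WqlEventQuery(
    @"SELECT * FROM RegistryTreeChangeEvent " +
    @"WHERE Hive='HKEY_LOCAL_MACHINE' AND RootPath='SOFTWARE\\Microsoft'");

using (var watcher = new ManagementEventWatcher(query))
{
    watcher.EventArrived += (sender, e) =>
        Console.WriteLine("Change under {0}\\{1}", e.NewEvent["Hive"], e.NewEvent["RootPath"]);

    watcher.Start();
    Console.ReadLine(); // wait here instead of busy-looping
    watcher.Stop();
}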
As Hans has explained, you are receiving the error because you haven't specified a WHERE clause. According to Creating a Proper WHERE Clause for the Registry Provider, you must specify a WHERE clause, otherwise you will receive the WBEM_E_TOO_BROAD error.
To simplify your code and avoid reinventing the wheel, you can use the following library: Asynchronous Registry Notification Using Strongly-typed WMI Classes in .NET
I'm playing around with Q#, which uses C# as a driver. I'd like to pass a Qubit object to the Q# code but it isn't working as expected.
C# Driver
using Microsoft.Quantum.Simulation.Core;
using Microsoft.Quantum.Simulation.Simulators;

namespace Quantum.QSharpApplication1 {
    class Driver {
        static void Main(string[] args) {
            using (var sim = new QuantumSimulator()) {
                var x = new Microsoft.Quantum.Simulation.Common.QubitManager(10);
                Qubit q1 = x.Allocate();
                Solve.Run(sim, q1, 1);
            }
            System.Console.WriteLine("Press any key to continue...");
            System.Console.ReadKey();
        }
    }
}
Q#
namespace Quantum.QSharpApplication1
{
    open Microsoft.Quantum.Primitive;
    open Microsoft.Quantum.Canon;

    operation Solve (q : Qubit, sign : Int) : ()
    {
        body
        {
            let qp = M(q);
            if (qp != Zero)
            {
                X(q);
            }
            H(q);
        }
    }
}
When I run this, it runs without error until it reaches the System.Console.* lines, at which point it throws the following exception in the Q# code:
System.AccessViolationException
HResult=0x80004003
Message=Attempted to read or write protected memory. This is often an indication that other memory is corrupt.
Source=<Cannot evaluate the exception source>
StackTrace:
<Cannot evaluate the exception stack trace>
The debugger associates this with the "let qp = M(q);" line in Q#.
Note that this does not happen during the Solve.Run call; the real code has multiple Solve calls and their output appears correct. It only seems to occur after the using (QuantumSimulator) scope is left. I recall reading that a Qubit must be reset to zero before it is released. I'm not sure whether that is the problem here, but I don't see a way to do that from C#. Interestingly, if I remove the Console lines, the program runs without error (timing?).
The QubitManager instance you used to create the qubits is not a singleton (each Simulator has its own QubitManager), so the Simulator is not aware of the Qubit you're trying to manipulate in the Q# code, hence the AccessViolationException.
In general, creating qubits in the driver is not supported; you can only allocate qubits with the using and borrowing statements inside Q#. The recommendation is to create an entry point in Q# that does the qubit allocation and call that from the driver, for example:
// MyOp.qs
operation EntryPoint() : ()
{
    body
    {
        using (register = Qubit[2])
        {
            myOp(register);
        }
    }
}

// Driver.cs
EntryPoint.Run(sim).Wait();
Finally, note that in your driver code you have this:
Solve.Run(sim, q1, 1);
The Run method returns a Task that executes asynchronously, so you typically need to add a Wait() to make sure it finishes executing:
EntryPoint.Run(sim, 1).Wait();
If you do this, you will notice that the failure happens during the Run, not at the Console.WriteLine.
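Putting those pieces together, a corrected driver might look roughly like this (EntryPoint refers to the Q# operation sketched above, so treat this as an illustration of the pattern rather than drop-in code):
using Microsoft.Quantum.Simulation.Simulators;

namespace Quantum.QSharpApplication1
{
    class Driver
    {
        static void Main(string[] args)
        {
            using (var sim = new QuantumSimulator())
            {
                // All qubit allocation (and release) happens inside the Q# operation.
                EntryPoint.Run(sim).Wait();
            }
            System.Console.WriteLine("Press any key to continue...");
            System.Console.ReadKey();
        }
    }
}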
Using the Microsoft.Diagnostics.Tracing EventSource library (not to be confused with System.Diagnostics.Tracing), it is possible to log certain messages to the Event Viewer by adding an attribute called Channel to the Event annotation. However, this sends the output to the 'Windows Logs\Application' area. How can I get it to log to 'Applications and Services Logs\MyApp\MyFeature' instead?
Example code:
[EventSource(Name = "MyDemoApp")]
public sealed class MyDemoEventSource : EventSource
{
    private MyDemoEventSource() { }
    ...
    public const EventTask MyDemoTask = (EventTask)12345;
    ...
    [Event(12345,
           Message = "My Demo Error: {0}",
           Level = EventLevel.Warning,
           Channel = EventChannel.Admin,
           Task = MyDemoTask,
           Keywords = Keywords.Rule,
           Opcode = Opcodes.Fail)]
    private void SomethingWentWrong(string ErrorMessage)
    {
        WriteEvent(12345, ErrorMessage);
    }
}
With thanks to Matthew Watson for pointing me in the direction of this article, the solution to the problem is contained within:
https://blogs.msdn.microsoft.com/dotnet/2014/01/30/microsoft-diagnostics-tracing-eventsource-is-now-stable/
Remember to register your EventSource, as this is the step that actually creates the entries in the Event Viewer. A unique name is required (if your company/product already has an entry in the Event Viewer for other purposes, make sure you use a new name).
I am getting info about users and groups from Active Directory. I am able to get various property values by way of an extension method.
public static string GetProperty(this Principal principal, string property)
{
    DirectoryEntry directoryEntry = principal.GetUnderlyingObject() as DirectoryEntry;
    if (directoryEntry.Properties.Contains(property))
    {
        return directoryEntry.Properties[property].Value.ToString();
    }
    return string.Empty;
}
The problem I am facing is that I need information that is buried deeper, such as when a user was added to a group, and I would like to be able to get this through C#. Is this possible? Possibly using these security events? https://support.microsoft.com/en-us/kb/174074
Using ASP.NET MVC, C#, DirectoryServices.dll
Assuming that your machines are set to collect those events, you can read the event logs to get the information you're looking for. I wrote this example to get logoff events; you can look up the event IDs on MSDN.
using System;
using System.Diagnostics;

namespace ReadEventLogs
{
    class Program
    {
        public static void Main(string[] args)
        {
            EventLog eventLog1 = new EventLog("Security", ".");
            foreach (EventLogEntry entry in eventLog1.Entries)
            {
                //Event ID 4624 LOGON
                //Event ID 4634 LOGOFF
                if (entry.InstanceId == 4634)
                {
                    Console.WriteLine(entry.Message);
                }
            }
        }
    }
}
If no computer name or server is specified, the event logs are read from the local machine (".").
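If you only care about specific events (for example, group-membership changes), it can be cheaper to let the Event Log service do the filtering with an XPath query instead of walking the whole Security log. Here is a sketch using System.Diagnostics.Eventing.Reader; event IDs 4728/4732 are the commonly cited "member was added to a global/local security group" events, but verify the IDs for your environment:
using System;
using System.Diagnostics.Eventing.Reader;

class GroupMembershipChanges
{
    static void Main()
    {
        // XPath filter evaluated by the Event Log service, so only matching records come back.
        string xpath = "*[System[(EventID=4728 or EventID=4732)]]";
        var query = new EventLogQuery("Security", PathType.LogName, xpath);

        using (var reader = new EventLogReader(query))
        {
            for (EventRecord record = reader.ReadEvent(); record != null; record = reader.ReadEvent())
            {
                Console.WriteLine("{0} [{1}]", record.TimeCreated, record.Id);
                Console.WriteLine(record.FormatDescription()); // resolved message text
            }
        }
    }
}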
I am currently using the Change Notifications in Active Directory Domain Services in .NET, as described in this blog. This returns all events that happen on a selected object (or in the subtree of that object). I now want to filter the list of events for creation and deletion (and maybe undeletion) events.
I would like to tell the ChangeNotifier class to only observe create/delete/undelete events. The alternative is to receive all events and filter them on my side. I know that in the case of the deletion of an object, the attribute list that is returned will contain the attribute isDeleted with the value True. But is there a way to see whether an event represents the creation of an object? In my tests the value of usnchanged is always usncreated+1 for user objects, and both are equal for OUs, but can this be relied on in high-frequency ADs? It is also possible to compare the created and modified timestamps. And how can I tell if an object has been undeleted?
Just for the record, here is the main part of the code from the blog:
public class ChangeNotifier : IDisposable
{
    static void Main(string[] args)
    {
        using (LdapConnection connect = CreateConnection("localhost"))
        {
            using (ChangeNotifier notifier = new ChangeNotifier(connect))
            {
                //register some objects for notifications (limit 5)
                notifier.Register("dc=dunnry,dc=net", SearchScope.OneLevel);
                notifier.Register("cn=testuser1,ou=users,dc=dunnry,dc=net", SearchScope.Base);

                notifier.ObjectChanged += new EventHandler<ObjectChangedEventArgs>(notifier_ObjectChanged);

                Console.WriteLine("Waiting for changes...");
                Console.WriteLine();
                Console.ReadLine();
            }
        }
    }

    static void notifier_ObjectChanged(object sender, ObjectChangedEventArgs e)
    {
        Console.WriteLine(e.Result.DistinguishedName);
        foreach (string attrib in e.Result.Attributes.AttributeNames)
        {
            foreach (var item in e.Result.Attributes[attrib].GetValues(typeof(string)))
            {
                Console.WriteLine("\t{0}: {1}", attrib, item);
            }
        }
        Console.WriteLine();
        Console.WriteLine("====================");
        Console.WriteLine();
    }

    LdapConnection _connection;
    HashSet<IAsyncResult> _results = new HashSet<IAsyncResult>();

    public ChangeNotifier(LdapConnection connection)
    {
        _connection = connection;
        _connection.AutoBind = true;
    }

    public void Register(string dn, SearchScope scope)
    {
        SearchRequest request = new SearchRequest(
            dn,                //root the search here
            "(objectClass=*)", //very inclusive
            scope,             //any scope works
            null               //we are interested in all attributes
            );

        //register our search
        request.Controls.Add(new DirectoryNotificationControl());

        //we will send this async and register our callback
        //note how we would like to have partial results
        IAsyncResult result = _connection.BeginSendRequest(
            request,
            TimeSpan.FromDays(1), //set timeout to a day...
            PartialResultProcessing.ReturnPartialResultsAndNotifyCallback,
            Notify,
            request
            );

        //store the hash for disposal later
        _results.Add(result);
    }

    private void Notify(IAsyncResult result)
    {
        //since our search is long running, we don't want to use EndSendRequest
        PartialResultsCollection prc = _connection.GetPartialResults(result);
        foreach (SearchResultEntry entry in prc)
        {
            OnObjectChanged(new ObjectChangedEventArgs(entry));
        }
    }

    private void OnObjectChanged(ObjectChangedEventArgs args)
    {
        if (ObjectChanged != null)
        {
            ObjectChanged(this, args);
        }
    }

    public event EventHandler<ObjectChangedEventArgs> ObjectChanged;

    #region IDisposable Members

    public void Dispose()
    {
        foreach (var result in _results)
        {
            //end each async search
            _connection.Abort(result);
        }
    }

    #endregion
}

public class ObjectChangedEventArgs : EventArgs
{
    public ObjectChangedEventArgs(SearchResultEntry entry)
    {
        Result = entry;
    }

    public SearchResultEntry Result { get; set; }
}
I participated in a design review about five years back on a project that started out using AD change notification. Very similar questions to yours were asked. I can share what I remember, and I don't think things have changed much since then. We ended up switching to DirSync.
It didn't seem possible to get just creates and deletes from AD change notifications. We also found that change notification produced enough events when monitoring a large directory that notification processing could bottleneck and fall behind. This API is not designed for scale, but as I recall performance/latency were not the primary reason we switched.
Yes, the usn relationship for new objects generally holds, although I think there are multi-DC scenarios where you can get usncreated == usnchanged for a new user. We didn't test that extensively, because...
The important thing for us was that change notification only gives you reliable object-creation detection under the unrealistic assumption that your machine is up 100% of the time! In production systems there are always cases where you need to reboot and catch up or re-synchronize, and we switched to DirSync because it has a robust way to handle those scenarios.
In our case, a missed object create could block email to a new user for an indeterminate time. That obviously wouldn't be good; we needed to be sure. For AD change notifications, getting that resync right would have taken more work and been hard to test. For DirSync it's more natural, and there's a fast-path resume mechanism that usually avoids a full resync. For safety, I think we triggered a full re-synchronization every day.
DirSync is not as real-time as change notification, but it's possible to get ~30-second average latency by issuing the DirSync query once a minute.
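For reference, here is a minimal sketch of the DirSync pattern with System.DirectoryServices.Protocols; the server name, search root, filter, and attribute list are placeholders, and the account running it typically needs the "Replicating Directory Changes" permission. The cookie returned with each response is what you persist so you can resume after a restart:
using System;
using System.DirectoryServices.Protocols;

class DirSyncExample
{
    static void Main()
    {
        using (var connection = new LdapConnection("dc01.example.com"))
        {
            byte[] cookie = null; // a null/empty cookie means a full initial sync

            var request = new SearchRequest(
                "dc=example,dc=com",   // DirSync searches are rooted at a naming context
                "(objectClass=user)",
                SearchScope.Subtree,
                "objectGUID", "isDeleted", "whenCreated");
            request.Controls.Add(new DirSyncRequestControl(cookie));

            var response = (SearchResponse)connection.SendRequest(request);

            foreach (SearchResultEntry entry in response.Entries)
            {
                Console.WriteLine(entry.DistinguishedName);
            }

            // Save the new cookie from the response control and reuse it for the next incremental query.
            foreach (DirectoryControl control in response.Controls)
            {
                if (control is DirSyncResponseControl dirSync)
                {
                    cookie = dirSync.Cookie;
                }
            }
        }
    }
}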
Currently I have a custom built static logging class in C# that can be called with the following code:
EventLogger.Log(EventLogger.EventType.Application, string.Format("AddData request from {0}", ipAddress));
When this is called it simply writes to a defined log file specified in a configuration file.
However, because I have to log many, many events, my code is starting to become hard to read because of all the logging messages.
Is there an established way to more or less separate logging code from objects and methods in a C# class so code doesn't become unruly?
Thank you all in advance for your help as this is something I have been struggling with lately.
I like the AOP features that PostSharp offers. In my opinion, logging is an aspect of any kind of software; it isn't the main value an application should provide.
So in my case, PostSharp has always worked fine. Spring.NET also has an AOP module which could be used to achieve this.
The most commonly used technique I have seen employs AOP in one form or another.
PostSharp is one product that does IL weaving as a form of AOP, though not the only way to do AOP in .NET.
A solution to this is to use Aspect-oriented programming in which you can separate these concerns. This is a pretty complex/invasive change though, so I'm not sure if it's feasible in your situation.
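To give a feel for what this looks like, here is a rough sketch of a PostSharp method-boundary aspect; Console.WriteLine stands in for whatever logger you actually use, and the class and method names are made up:
using System;
using PostSharp.Aspects;

// Applied as [TraceAspect] on a method (or a whole class), PostSharp weaves OnEntry/OnExit
// around the method body at compile time, so the logging never appears in your own code.
[Serializable]
public class TraceAspect : OnMethodBoundaryAspect
{
    public override void OnEntry(MethodExecutionArgs args)
    {
        Console.WriteLine("Entering {0}.{1}", args.Method.DeclaringType.Name, args.Method.Name);
    }

    public override void OnExit(MethodExecutionArgs args)
    {
        Console.WriteLine("Exiting {0}.{1}", args.Method.DeclaringType.Name, args.Method.Name);
    }
}

public class DataService
{
    [TraceAspect]
    public void AddData(string ipAddress)
    {
        // only the business logic lives here; entry/exit logging is woven in by PostSharp
    }
}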
I used to have a custom-built logger but recently changed to TracerX. It provides a simple way to instrument the code with different levels of severity. Loggers can be created with names closely related to the class you are working with.
It has a separate viewer with a lot of filtering capabilities, including by logger, severity, and so on.
http://tracerx.codeplex.com/
There is an article on it here: http://www.codeproject.com/KB/dotnet/TracerX.aspx
If your primary goal is to log function entry/exit points and occasional information in between, I've had good results with a disposable logging object, where the constructor traces the function entry and Dispose() traces the exit. This allows calling code to simply wrap each method's code in a single using statement. Methods are also provided for arbitrary logs in between. Here is a complete C# ETW event tracing class along with a function entry/exit wrapper:
using System;
using System.Diagnostics;
using System.Diagnostics.Tracing;
using System.Reflection;
using System.Runtime.CompilerServices;

namespace MyExample
{
    // This class traces function entry/exit
    // Constructor is used to automatically log function entry.
    // Dispose is used to automatically log function exit.
    // use "using(FnTraceWrap x = new FnTraceWrap()){ function code }" pattern for function entry/exit tracing
    public class FnTraceWrap : IDisposable
    {
        string methodName;
        string className;
        private bool _disposed = false;

        public FnTraceWrap()
        {
            StackFrame frame;
            MethodBase method;

            frame = new StackFrame(1);
            method = frame.GetMethod();
            this.methodName = method.Name;
            this.className = method.DeclaringType.Name;

            MyEventSourceClass.Log.TraceEnter(this.className, this.methodName);
        }

        public void TraceMessage(string format, params object[] args)
        {
            string message = String.Format(format, args);
            MyEventSourceClass.Log.TraceMessage(message);
        }

        public void Dispose()
        {
            if (!this._disposed)
            {
                this._disposed = true;
                MyEventSourceClass.Log.TraceExit(this.className, this.methodName);
            }
        }
    }

    [EventSource(Name = "MyEventSource")]
    sealed class MyEventSourceClass : EventSource
    {
        // Global singleton instance
        public static MyEventSourceClass Log = new MyEventSourceClass();

        private MyEventSourceClass()
        {
        }

        [Event(1, Opcode = EventOpcode.Info, Level = EventLevel.Informational)]
        public void TraceMessage(string message)
        {
            WriteEvent(1, message);
        }

        [Event(2, Message = "{0}({1}) - {2}: {3}", Opcode = EventOpcode.Info, Level = EventLevel.Informational)]
        public void TraceCodeLine([CallerFilePath] string filePath = "",
                                  [CallerLineNumber] int line = 0,
                                  [CallerMemberName] string memberName = "", string message = "")
        {
            WriteEvent(2, filePath, line, memberName, message);
        }

        // Function-level entry and exit tracing
        [Event(3, Message = "Entering {0}.{1}", Opcode = EventOpcode.Start, Level = EventLevel.Informational)]
        public void TraceEnter(string className, string methodName)
        {
            WriteEvent(3, className, methodName);
        }

        [Event(4, Message = "Exiting {0}.{1}", Opcode = EventOpcode.Stop, Level = EventLevel.Informational)]
        public void TraceExit(string className, string methodName)
        {
            WriteEvent(4, className, methodName);
        }
    }
}
Code that uses it will look something like this:
public void DoWork(string foo)
{
    using (FnTraceWrap fnTrace = new FnTraceWrap())
    {
        fnTrace.TraceMessage("Doing work on {0}.", foo);
        /*
        code ...
        */
    }
}
To make the code readable, only log what you really need to (info/warning/error). Log debug messages during development, but remove most of them when you are finished. For trace logging, use AOP to log simple things like method entry/exit (if you feel you need that kind of granularity).
Example:
public int SomeMethod(int arg)
{
    Log.Trace("SomeClass.SomeMethod({0}), entering", arg);   // A
    if (arg < 0)
    {
        arg = -arg;
        Log.Warn("Negative arg {0} was corrected", arg);      // B
    }
    Log.Trace("SomeClass.SomeMethod({0}), returning.", arg);  // C
    return 2 * arg;
}
In this example, the only necessary log statement is B. The log statements A and C are boilerplate logging that you can leave to PostSharp to insert for you instead.
Also: in your example you can see that there is some form of "Action X invoked by Y", which suggests that a lot of your code could in fact be moved up to a higher level (e.g. Command/Filter).
Your proliferation of logging statements could be telling you something: some form of design pattern could be used, which would also centralize a lot of the logging. For example:
void DoSomething(Command command, User user)
{
    Log.Info("Command {0} invoked by {1}", command, user);
    command.Process(user);
}
I think a good option is to implement something similar to the filters in ASP.NET MVC. This is implemented with the help of attributes and reflection: you mark every method you want to log in a certain way and enjoy (a rough sketch follows below). I suppose there might be a better way to do it, maybe with the help of the Observer pattern or something, but for as long as I've thought about it I couldn't come up with anything better.
Basically, such problems are called cross-cutting concerns and can be tackled with the help of AOP.
I also think that some interesting inheritance schemes could be applied, with log entities at the base, but I would go for filters.
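To make the filter idea concrete, here is a rough sketch of the attribute-plus-reflection approach; all names are invented, and a real implementation would more likely sit in a dispatcher, proxy, or AOP framework than call methods via reflection by hand:
using System;
using System.Reflection;

// Marker attribute: put this on any method whose calls should be logged.
[AttributeUsage(AttributeTargets.Method)]
public class LogCallAttribute : Attribute { }

public static class LoggedInvoker
{
    // Invokes a method on the target and writes entry/exit lines only if it carries [LogCall].
    public static object Invoke(object target, string methodName, params object[] args)
    {
        MethodInfo method = target.GetType().GetMethod(methodName);
        bool shouldLog = method.GetCustomAttribute<LogCallAttribute>() != null;

        if (shouldLog)
            Console.WriteLine("Entering {0}.{1}", target.GetType().Name, methodName);

        object result = method.Invoke(target, args);

        if (shouldLog)
            Console.WriteLine("Exiting {0}.{1}", target.GetType().Name, methodName);

        return result;
    }
}

public class DataService
{
    [LogCall]
    public void AddData(string ipAddress)
    {
        // business logic only; the logging decision lives in LoggedInvoker
    }
}

// Usage: LoggedInvoker.Invoke(new DataService(), "AddData", "127.0.0.1");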