Adding New Relic's Custom Instrumentation to background process in Windows - c#

I am trying to monitor methods inside a .NET app which is a background process using New Relic for which I know I need to add Custom Instrumentation.
I have re-installed the .NET Agent, configured "Instrument all .NET Applications", and made changes to the app.config and newrelic.config files; I am now getting basic data for the background process in the New Relic dashboard.
Now, to add custom instrumentation, I have added another instrumentation config file inside the extensions directory. I restarted the app, but I still can't see the new/custom methods I am trying to monitor.
This is my instrumentation file MyInstrumentation.xml
<?xml version="1.0" encoding="utf-8"?>
<!-- instrument EngineService.BestAgentSolver.Solve inside EngineService.BestAgentSolver -->
<tracerFactory metricName="Cast-a-Net.EngineService.BestAgentSolver.Solve-Metric">
<match assemblyName="Cast-a-Net.EngineService" className="Cast-a-Net.EngineService.BestAgentSolver">
<exactMethodMatcher methodName="Solve" />
</match>
</tracerFactory>
<!-- instrument EngineService.SessionManager.BroadcastLeadCounts inside EngineService.SessionManager -->
<tracerFactory metricName="Cast-a-Net.EngineService.SessionManager.BroadcastLeadCounts-Metric">
<match assemblyName="Cast-a-Net.EngineService" className="Cast-a-Net.EngineService.SessionManager">
<exactMethodMatcher methodName="BroadcastLeadCounts" />
</match>
</tracerFactory>
<tracerFactory metricName="myapp.Web.Controllers.CallListController.ActionResult-Metric">
<match assemblyName="myapp.Web" className="myapp.Web.Controllers.CallListController">
<exactMethodMatcher methodName="ActionResult" />
</match>
</tracerFactory>
Am I missing a step or doing something wrong?

Custom instrumentation in the .NET agent works with web transactions that use the HttpContext object. Our .NET agent API, on the other hand, allows you to collect metrics that can be displayed in a custom dashboard. In particular, RecordMetric, RecordResponseTimeMetric, and IncrementCounter are useful because they work with non-web applications.
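For example, assuming the agent API assembly is referenced (the NewRelic.Api.Agent package), a background process could report custom metrics along these lines; the class, method, and metric names below are made up for illustration:

```csharp
public class SolverMetrics
{
    // Hypothetical hook called at the end of each background work cycle
    public void ReportCycle(int leadsProcessed, long elapsedMs)
    {
        // Arbitrary value metric, viewable in a custom dashboard under Custom/
        NewRelic.Api.Agent.NewRelic.RecordMetric("Custom/LeadsProcessed", leadsProcessed);

        // Response-time style metric, in milliseconds
        NewRelic.Api.Agent.NewRelic.RecordResponseTimeMetric("Custom/SolveTime", elapsedMs);

        // Simple counter metric
        NewRelic.Api.Agent.NewRelic.IncrementCounter("Custom/SolveCycles");
    }
}
```

These calls are no-ops when the agent is not attached, so they are safe to leave in production code.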
Starting with version 2.24.218.0 of the .NET agent, however, a new feature can be used to create transactions where the agent would not normally do so. This is a manual process via a custom instrumentation file.
Create a custom instrumentation file named, say, CustomInstrumentation.xml, in C:\ProgramData\New Relic\.NET Agent\Extensions alongside CoreInstrumentation.xml. Add the following content to your custom instrumentation file:
<?xml version="1.0" encoding="utf-8"?>
<extension xmlns="urn:newrelic-extension">
<instrumentation>
<tracerFactory name="NewRelic.Agent.Core.Tracer.Factories.BackgroundThreadTracerFactory" metricName="Category/Name">
<match assemblyName="AssemblyName" className="NameSpace.ClassName">
<exactMethodMatcher methodName="MethodName" />
</match>
</tracerFactory>
</instrumentation>
</extension>
You must change the attribute values Category/Name, AssemblyName, NameSpace.ClassName, and MethodName above:
The transaction starts when an object of type NameSpace.ClassName from assembly AssemblyName invokes the method MethodName. The transaction ends when the method returns or throws an exception. The transaction will be named Name and will be grouped into the transaction type specified by Category. In the New Relic UI you can select the transaction type from the Type drop down menu when viewing the Monitoring > Transactions page.
Note that both Category and Name must be present and must be separated by a slash.
As you would expect, instrumented activity (methods, database, externals) occurring during the method's invocation will be shown in the transaction's breakdown table and in transaction traces.
Here is a more concrete example. First, the instrumentation file:
<?xml version="1.0" encoding="utf-8"?>
<extension xmlns="urn:newrelic-extension">
<instrumentation>
<tracerFactory name="NewRelic.Agent.Core.Tracer.Factories.BackgroundThreadTracerFactory" metricName="Background/Bars">
<match assemblyName="Foo" className="Foo.Bar">
<exactMethodMatcher methodName="Bar1" />
<exactMethodMatcher methodName="Bar2" />
</match>
</tracerFactory>
<tracerFactory metricName="Custom/some custom metric name">
<match assemblyName="Foo" className="Foo.Bar">
<exactMethodMatcher methodName="Bar3" />
</match>
</tracerFactory>
</instrumentation>
</extension>
Now some code:
var foo = new Foo();
foo.Bar1(); // Creates a transaction named Bars in category Background
foo.Bar2(); // Same here.
foo.Bar3(); // Won't create a new transaction. See notes below.
using System.Configuration;
using System.Data.SqlClient;
using System.Net;
public class Foo
{
// this will result in a transaction with an External Service request segment in the transaction trace
public void Bar1()
{
new WebClient().DownloadString("http://www.google.com/");
}
// this will result in a transaction that has one segment with a category of "Custom" and a name of "some custom metric name"
public void Bar2()
{
// the segment for Bar3 will contain your SQL query inside of it and possibly an execution plan
Bar3();
}
// if Bar3 is called directly, it won't get a transaction made for it.
// However, if it is called inside of Bar1 or Bar2 then it will show up as a segment containing the SQL query
private void Bar3()
{
using (var connection = new SqlConnection(ConfigurationManager.ConnectionStrings["MsSqlConnection"].ConnectionString))
{
connection.Open();
using (var command = new SqlCommand("SELECT * FROM table", connection))
using (var reader = command.ExecuteReader())
{
reader.Read();
}
}
}
}
Here is a simple console app that demonstrates Custom Transactions:
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
namespace ConsoleApplication1
{
class Program
{
static void Main(string[] args)
{
Console.WriteLine("Custom Transactions");
var t = new CustomTransaction();
for (int i = 0; i < 100; ++i )
t.StartTransaction();
}
}
class CustomTransaction
{
public void StartTransaction()
{
Console.WriteLine("StartTransaction");
Dummy();
}
void Dummy()
{
System.Threading.Thread.Sleep(5000);
}
}
}
Use the following custom instrumentation file:
<?xml version="1.0" encoding="utf-8"?>
<extension xmlns="urn:newrelic-extension">
<instrumentation>
<tracerFactory name="NewRelic.Agent.Core.Tracer.Factories.BackgroundThreadTracerFactory" metricName="Background/CustomTransaction">
<match assemblyName="ConsoleApplication1" className="ConsoleApplication1.CustomTransaction">
<exactMethodMatcher methodName="StartTransaction" />
</match>
</tracerFactory>
<tracerFactory metricName="Custom/Dummy">
<match assemblyName="ConsoleApplication1" className="ConsoleApplication1.CustomTransaction">
<exactMethodMatcher methodName="Dummy" />
</match>
</tracerFactory>
</instrumentation>
</extension>
After running the application a few times you should see a custom transaction in the Other Transactions, Background category. You should see the Dummy segment in the transactions breakdown table and transaction trace.


What does SetApplicationName actually do in reference to AddDataProtection?

Consider this example code:
services.AddDataProtection()
.SetApplicationName("TestingApp");
The docs seem to indicate that setting the application name like this will allow two servers to know that they should use the same key for cookies (assuming that it is also persisted using something like PersistKeysToDbContext).
However, when I look at the persisted key, I don't see the shared application name anywhere in the document:
<key id="c34df9c6-bf38-49e8-97cf-dda5f8bc141c" version="1">
<creationDate>2021-03-23T21:47:20.4615267Z</creationDate>
<activationDate>2021-03-23T21:47:19.4492298Z</activationDate>
<expirationDate>2021-06-21T21:47:19.4492298Z</expirationDate>
<descriptor deserializerType="Microsoft.AspNetCore.DataProtection.AuthenticatedEncryption.ConfigurationModel.AuthenticatedEncryptorDescriptorDeserializer, Microsoft.AspNetCore.DataProtection, Version=3.1.13.0, Culture=neutral, PublicKeyToken=adb9793829ddae60">
<descriptor>
<encryption algorithm="AES_256_CBC" />
<validation algorithm="HMACSHA256" />
<masterKey xmlns:p4="http://schemas.asp.net/2015/03/dataProtection" p4:requiresEncryption="true">
<!-- Warning: the key below is in an unencrypted form. -->
<value>eNZoh1a2DEiEi03ae1aklP9dM3z____FAKE_____Pgy8bcVpkOI+q/I9d9iELvy+ptOW54Q==</value>
</masterKey>
</descriptor>
</descriptor>
</key>
The "Friendly Name" of the key seems to be a Guid.
What does SetApplicationName actually cause to change in the persisted XML?
SetApplicationName does not cause any change in the persisted XML. In fact, you can change the application name at any time after the XML is generated.
Application name is used as a master Purpose string for IDataProtectionProvider.
Consider the following code example:
// Startup.cs
services.AddDataProtection().SetApplicationName( "My application name" );
// ExampleConsumer.cs
public ExampleConsumer( IDataProtectionProvider protectionProvider )
{
var dataProtector = protectionProvider.CreateProtector( "My protector" );
}
If you run this code in a debugger and check the purposes, it will look like this:
protectionProvider.Purposes: ["My application name"]
dataProtector.Purposes: ["My application name", "My protector"]
For more information on purposes, see Purpose hierarchy.
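To see the application name acting as a purpose in practice, here is a minimal sketch assuming the Microsoft.AspNetCore.DataProtection.Extensions package; the key directory path is made up:

```csharp
using System;
using System.IO;
using Microsoft.AspNetCore.DataProtection;

class Program
{
    static void Main()
    {
        var keyDir = new DirectoryInfo(@"C:\temp\dp-keys"); // hypothetical shared key ring

        // Two providers, e.g. two servers, sharing a key ring and an application name
        var server1 = DataProtectionProvider.Create(keyDir, b => b.SetApplicationName("TestingApp"));
        var server2 = DataProtectionProvider.Create(keyDir, b => b.SetApplicationName("TestingApp"));

        string payload = server1.CreateProtector("My protector").Protect("hello");

        // Succeeds because the key ring, application name, and protector purpose all match
        Console.WriteLine(server2.CreateProtector("My protector").Unprotect(payload));

        // A provider created with a different application name would throw a
        // CryptographicException on Unprotect, even with the same key file on disk
    }
}
```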

Send large stream to ServiceFabric Service

I have a ServiceFabric service hosting WebAPI. On a controller, I receive, in my Request, a FileStream. I have no problem reading the FileStream there.
Then, I want this WebAPI service to call another SF service (stateful) - let's call it Service2, giving a MemoryStream in parameter.
try
{
await _service2Proxy.MyService2Method(myMemoryStream, otherParameters);
// Line after
}
catch
{
// Error handling
}
And in the Service2
public Task MyService2Method(MemoryStream ms, string otherParam)
{
// Log line
// Do something
}
It works perfectly well with a file < 3 MB. Yet, with a file > 5 MB, the call doesn't work. We never reach // Line after, // Error handling, or // Log line.
I did add [assembly: FabricTransportServiceRemotingProvider(MaxMessageSize = int.MaxValue)] on the controller assembly, the WebAPI service assembly and the Service2 assembly.
The Service2 interface has the [OperationContract] and [ServiceContract] attributes.
I also tried sending a byte[] instead of a MemoryStream. The problem is still the same.
If it's a StatefulService and you use some ReliableDictionary with huge data, it could lead to similar issues when SF replicates your dictionary data.
You can set two more settings to prevent this:
Set the MaxReplicationMessageSize when you create the service instance.
Init your ServiceReplicaListener with custom FabricTransportListenerSettings : MaxMessageSize
Code:
public MyStateFulService(StatefulServiceContext context)
: base(context, new ReliableStateManager(context, new ReliableStateManagerConfiguration(new ReliableStateManagerReplicatorSettings
{
MaxReplicationMessageSize = 1073741824
}))){ }
protected override IEnumerable<ServiceReplicaListener> CreateServiceReplicaListeners()
{
var setting = new FabricTransportListenerSettings();
setting.MaxMessageSize = 1073741824;
return new[] { new ServiceReplicaListener(initParams => new FabricTransportServiceRemotingListener(initParams, this, setting), "RpcListener")};
}
Edit :
A much better way to do this: if you have authentication between replicas, you should set these settings in Settings.xml.
<?xml version="1.0" encoding="utf-8" ?>
<Settings xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns="http://schemas.microsoft.com/2011/01/fabric">
<!-- This is used by the StateManager's replicator. -->
<Section Name="ReplicatorConfig">
<Parameter Name="ReplicatorEndpoint" Value="ReplicatorEndpoint" />
<Parameter Name="MaxReplicationMessageSize" Value="1073741824" />
</Section>
<!-- This is used for securing StateManager's replication traffic. -->
<Section Name="ReplicatorSecurityConfig">
<Parameter Name="CredentialType" Value="Windows" />
<Parameter Name="ProtectionLevel" Value="None" />
</Section>
<!-- Add your custom configuration sections and parameters here. -->
<!--
<Section Name="MyConfigSection">
<Parameter Name="MyParameter" Value="Value1" />
</Section>
-->
</Settings>
It works fine for us.
Make sure you are setting the assembly attribute properly.
https://msdn.microsoft.com/en-us/library/4w8c1y2s(v=vs.110).aspx
Here is what we are doing.
using Microsoft.ServiceFabric.Services.Remoting.FabricTransport;
[assembly: FabricTransportServiceRemotingProvider(MaxMessageSize = 134217728)]
Again make sure this is available in the assembly that is creating the service remoting listener and the assembly that is calling it with ServiceProxy.
Alternatively, you can set the max message size programmatically when you create the listener, or in a settings.xml config file. See here for more info on that: https://azure.microsoft.com/en-us/documentation/articles/service-fabric-reliable-services-secure-communication/
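On the calling side, one way to raise the limit programmatically is via the transport settings passed to the proxy factory. Treat this as a sketch: the type names assume the FabricTransport remoting stack and may differ between SDK versions, and IService2 and the service URI are placeholders:

```csharp
using System;
using Microsoft.ServiceFabric.Services.Remoting.Client;
using Microsoft.ServiceFabric.Services.Remoting.FabricTransport;
using Microsoft.ServiceFabric.Services.Remoting.FabricTransport.Client;

// Transport settings for the client; should match (or not exceed) the listener's limit
var settings = new FabricTransportRemotingSettings
{
    MaxMessageSize = 134217728
};

// Factory that builds proxies using those settings
var proxyFactory = new ServiceProxyFactory(
    handler => new FabricTransportServiceRemotingClientFactory(settings, handler));

// var service2Proxy = proxyFactory.CreateServiceProxy<IService2>(
//     new Uri("fabric:/MyApp/Service2"));
```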

How to add job with trigger for running Quartz.NET scheduler instance without restarting server?

Is it possible to add job with trigger for running Quartz.NET scheduler instance without restarting server?
A fairly robust implementation with AdoJobStore is to have a custom table to store jobs, and to create a class that implements ISchedulerPlugin and IJob to create schedules for your jobs automatically.
Your config will look like this:
<add key="quartz.plugin.sqlquartzjobs.type" value="(JobSchedulerPlugin assembly path)" />
<add key="quartz.plugin.sqlquartzjobs.RescanCronExpression" value="0 0/5 * * * ?" /> <!-- plugin fires every five minutes -->
<add key="quartz.plugin.sqlquartzjobs.ConnectionString" value="(your connection string)" />
Your plugin/job class can look like this:
public class JobSchedulerPlugin : ISchedulerPlugin, IJob
{
//Entry point for plugin, quartz server runs when it starts
public void Initialize(string pluginName, IScheduler sched)
{
Name = pluginName;
Scheduler = sched;
}
//Runs after Initialize()
public void Start()
{
//schedule plugin as a job
JobDataMap jobData = new JobDataMap();
jobData["ConnectionString"] = ConnectionString;
IJobDetail job = JobBuilder.Create(this.GetType())
.WithDescription("Job to rescan jobs from SQL db")
.WithIdentity(new JobKey(JobInitializationPluginJobName, JobInitializationPluginGroup))
.UsingJobData(jobData)
.Build();
TriggerKey triggerKey = new TriggerKey(JobInitializationPluginJobTriggerName, JobInitializationPluginGroup);
ITrigger trigger = TriggerBuilder.Create()
.WithCronSchedule(ConfigFileCronExpression)
.StartNow()
.WithDescription("trigger for sql job loader")
.WithIdentity(triggerKey)
.WithPriority(1)
.Build();
Scheduler.ScheduleJob(job, trigger);
}
}
Now JobSchedulerPlugin has entered a trigger into QRTZ_TRIGGERS that will fire every five minutes with the highest priority. You can use it to load jobs from your custom table (let's call it QUARTZJOBS). QUARTZJOBS can contain information such as job names, assembly paths, dates, status, and anything else that helps you create triggers efficiently. It should also contain the cron expression for each job. This is what you can do when the trigger fires:
//Entry point of every job
public void Execute(IJobExecutionContext context)
{
Scheduler = context.Scheduler;
JobCollection jobs = LoadJobs(context.JobDetail.JobDataMap["ConnectionString"].ToString());
JobsWithTriggers jobTriggers = CreateTriggers(jobs);
ScheduleJobs(jobTriggers);
}
//You can use ADO.NET or an ORM here to load job information from the table
//and push it into a class.
protected JobCollection LoadJobs(string connectionString);
//In this method you create JobDetails and ITriggers for each job
//and push them into a custom class.
protected JobsWithTriggers CreateTriggers(JobCollection jobs);
//Finally, here you schedule the jobs.
protected void ScheduleJobs(JobsWithTriggers jobTriggers);
In each of the classes above you can add custom validation for making sure triggers are handled appropriately if status or cron expression changes.
With this solution the server will never need to be restarted. The plugin/job class will scan the table and act accordingly.
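For completeness: if you only need to add a job occasionally, no plugin is required at all. IScheduler accepts new jobs and triggers while it is running, and with AdoJobStore they are persisted immediately. A sketch (job, group, and trigger names below are illustrative, and MyJob is assumed to implement IJob):

```csharp
using Quartz;

// "scheduler" is the already-running IScheduler instance
IJobDetail job = JobBuilder.Create<MyJob>()
    .WithIdentity("myJob", "myGroup")
    .Build();

ITrigger trigger = TriggerBuilder.Create()
    .WithIdentity("myTrigger", "myGroup")
    .StartNow()
    .WithSimpleSchedule(x => x.WithIntervalInMinutes(5).RepeatForever())
    .Build();

// Takes effect immediately; no server restart needed
scheduler.ScheduleJob(job, trigger);
```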
What is your data store?
Here is one scenario... a little off the beaten path:
You can write a small console app (or similar) that is the "Job Populator".
You can wire it to pull job definitions from an xml file, and push them into ADO datastore (sql server).
Here is my quartz config to do this:
<quartz>
<!--
This configuration is a way to have jobs defined in xml, but will get them written to the database.
See https://stackoverflow.com/questions/21589964/ramjobstore-quartz-jobs-xml-to-adojobstore-data-move/
-->
<add key="quartz.plugin.xml.type" value="Quartz.Plugin.Xml.XMLSchedulingDataProcessorPlugin, Quartz" />
<add key="quartz.plugin.xml.fileNames" value="~/Quartz_Jobs_001.xml" />
<!--
<add key="quartz.plugin.xml.ScanInterval" value="10" />
-->
<add key="quartz.jobStore.type" value="Quartz.Impl.AdoJobStore.JobStoreTX, Quartz" />
<add key="quartz.jobStore.driverDelegateType" value="Quartz.Impl.AdoJobStore.SqlServerDelegate, Quartz"/>
<add key="quartz.jobStore.dataSource" value="default"/>
<add key="quartz.dataSource.default.connectionString" value="Server=MyServer\MyInstance;Database=QuartzDB;Trusted_Connection=True;Application Name='quartz_config';"/>
<add key="quartz.dataSource.default.provider" value="SqlServer-20"/>
</quartz>
As you can see from the comments in the XML, I got help with this.
Here is the original :
RAMJobStore (quartz_jobs.xml) to AdoJobStore Data Move

How to get rid of app.config and move it all into code?

I tried this question in a generic way on this post: https://stackoverflow.com/q/18968846/147637
But that did not get us to the result.
Soooo, here it is concretely!
I have the code below. It works. In VS, you add a web reference, code up the below, and then.... start fiddling the app.config.
And it works.
But I need to get rid of the app config. It is a problem that crucial pieces of the code are not in the.... code. It is hard to document, and easy for folks looking at this example to forget to look in the app config (this is an example for other devs).
So the question is: How do I move the contents of app.config into code?
(I am a part part part time coder. Pointing me at generic documentation won't get me there, sorry to say!)
**// .cs file:**
using myNameSpace.joesWebService.WebAPI.SOAP;
namespace myNameSpace
{
class Program
{
static void Main(string[] args)
{
// create the SOAP client
joesWebServerClient server = new joesWebServerClient();
string payloadXML = Loadpayload(filename);
// Run the SOAP transaction
string response = server.WebProcessShipment(string.Format("{0}#{1}", Username, Password), payloadXML);
}
}
}
=================================================
**app.config**
<?xml version="1.0" encoding="utf-8" ?>
<configuration>
<startup>
<supportedRuntime version="v4.0" sku=".NETFramework,Version=v4.5" />
</startup>
<system.serviceModel>
<bindings>
<basicHttpBinding>
<!-- Some non default stuff has been added by hand here -->
<binding name="IjoesWebServerbinding" maxBufferSize="256000000" maxReceivedMessageSize="256000000" />
</basicHttpBinding>
</bindings>
<client>
<endpoint address="http://joesWebServer/soap/IEntryPoint"
binding="basicHttpBinding" bindingConfiguration="IjoesWebServerbinding"
contract="myNameSpace.joesWebService.WebAPI.SOAP.IjoesWebServer"
name="IjoesWebServerSOAP" />
</client>
</system.serviceModel>
</configuration>
Generally speaking, a config file is preferred over hard-coding the settings because all you need to do with a config file is change the values you want to change and then restart the application. If they're hardcoded, you have to modify the source, recompile and redeploy.
Having said that, you can pretty much do everything in code that you do in the config file for WCF (I seem to recall a few exceptions, but don't remember them off hand).
One way to achieve what you're looking for is to define the binding in your code and create the client via ChannelFactory<T>, where T is the interface for your service (more accurately the service contract, which is usually in an interface and then implemented by a class).
For example:
using System.ServiceModel;
using myNameSpace.joesWebService.WebAPI.SOAP;
namespace myNameSpace
{
class Program
{
static void Main(string[] args)
{
// Create the binding
BasicHttpBinding myBinding = new BasicHttpBinding();
myBinding.MaxBufferSize = 256000000;
myBinding.MaxReceivedMessageSize = 256000000;
// Create the Channel Factory
ChannelFactory<IjoesWebServer> factory =
new ChannelFactory<IjoesWebServer>(myBinding, "http://joesWebServer/soap/IEntryPoint");
// Create, use and close the client
IjoesWebServer client = null;
string payloadXML = Loadpayload(filename);
string response;
try
{
client = factory.CreateChannel();
((IClientChannel)client).Open();
response = client.WebProcessShipment(string.Format("{0}#{1}", Username, Password), payloadXML);
((IClientChannel)client).Close();
}
catch (Exception ex)
{
if (client != null) ((IClientChannel)client).Abort();
// Do something with the error (ex.Message) here
}
}
}
}
Now you don't need a config file. The additional settings you had in the example are now in the code.
The advantage of ChannelFactory<T> is that once you create an instance of the factory, you can generate new channels (think of them as clients) at will by calling CreateChannel(). This will speed things up as most of your overhead will be in the creation of the factory.
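As a sketch of that pattern, here is one factory reused for many calls, with the same hypothetical IjoesWebServer contract and binding as above:

```csharp
// Create the factory once and keep it around, e.g. in a long-lived field
var factory = new ChannelFactory<IjoesWebServer>(myBinding, "http://joesWebServer/soap/IEntryPoint");

for (int i = 0; i < 10; i++)
{
    IjoesWebServer client = factory.CreateChannel(); // cheap compared to building the factory
    ((IClientChannel)client).Open();
    // ... make calls on client ...
    ((IClientChannel)client).Close();
}

factory.Close(); // dispose of the factory when completely done
```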
An additional note: you're using I<name> in a lot of places in your config file. An I prefix usually denotes an interface, and if a full-time developer were to look at your project it might be a little confusing for them at first glance.
With WCF 4.5, if you add a static config method to your WCF service class, it will be loaded automatically and whatever is in the app.config file will be ignored.
<ServiceContract()>
Public Interface IWCFService
<OperationContract()>
Function GetData(ByVal value As Integer) As String
<OperationContract()>
Function GetDataUsingDataContract(ByVal composite As CompositeType) As CompositeType
End Interface
Public Class WCFService
Implements IWCFService
Public Shared Function CreateClient() As Object
End Function
Public Shared Sub Configure(config As ServiceConfiguration)
'Define service endpoint
config.AddServiceEndpoint(GetType(IWCFService), _
New NetNamedPipeBinding, _
New Uri("net.pipe://localhost/WCFService"))
'Define service behaviors
Dim myServiceBehaviors As New Description.ServiceDebugBehavior With {.IncludeExceptionDetailInFaults = True}
config.Description.Behaviors.Add(myServiceBehaviors)
End Sub
Public Function GetData(ByVal value As Integer) As String Implements IWCFService.GetData
Return String.Format("You entered: {0}", value)
End Function
Public Function GetDataUsingDataContract(ByVal composite As CompositeType) As CompositeType Implements IWCFService.GetDataUsingDataContract
End Function
End Class
I'm still looking into how to do the same for the client. I'll try to update when I figure it out if there's any interest.

How to Read Custom XML from the app.config?

I want to read the custom XML section from the app.config of a C# windows service.
How do I go about it?
The XML is below:
<Books>
<Book name="name1" title="title1"/>
<Book name="name2" title="title2"/>
</Books>
In a project I developed, I use something similar for configuration. I believe the article it came from was called "The Last Configuration Section Handler I'll Ever Need" (I can't find a working link; maybe someone can link it for me).
This method takes what you want to do one step further, and actually de-serializes the object into memory. I'm just copying code from my project, but it should be fairly simple to take a step backwards if all you want is the XML.
First, you need to define a class that handles your configuration settings.
using System;
using System.Configuration;
using System.Xml;
using System.Xml.Serialization;
using System.Xml.XPath;
namespace Ariel.config
{
class XmlSerializerSectionHandler : IConfigurationSectionHandler
{
#region IConfigurationSectionHandler Members
public object Create(object parent, object configContext, XmlNode section)
{
XPathNavigator nav = section.CreateNavigator();
string typename = (string)nav.Evaluate("string(@type)");
Type t = Type.GetType(typename);
XmlSerializer ser = new XmlSerializer(t);
return ser.Deserialize(new XmlNodeReader(section));
}
#endregion
}
}
Now, say you want to load a section of configuration... super easy: cast to the type of object you expect it to XML-serialize to, and pass the name of the section you're looking for (in this case SearchSettings).
try
{
config = (Eagle.Search.SearchSettings)ConfigurationSettings.GetConfig("SearchSettings");
}
catch (System.Configuration.ConfigurationException ex)
{
syslog.FatalException("Loading search configuration failed, you likely have an error", ex);
return;
}
Now, all you need is your App.config file. I chose to split mine into separate files (1 file per section) just to make managing the config a little easier. You define a section, give it a name, and define the type (whatever you called the class listed above) and the assembly's name.
App.config:
<?xml version="1.0" encoding="utf-8" ?>
<configuration>
<configSections>
<section name="SearchSettings" type="Ariel.config.XmlSerializerSectionHandler, Ariel"/>
</configSections>
<SearchSettings configSource="Config\Search.config" />
</configuration>
Now, all that's left is the config file to be de-serialized. What's important here is that the root element matches your section name, and that the type attribute names the object it should de-serialize to along with its assembly name.
<?xml version="1.0" encoding="utf-8" ?>
<SearchSettings type="Eagle.Search.SearchSettings, Eagle">
<NumThreads>4</NumThreads>
</SearchSettings>
If you just want the pure raw XML, all you should need to do is modify the class that handles the section to return the XmlNode itself instead of de-serializing it.
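For instance, a minimal handler along those lines might simply hand back the node (a sketch, not from the original article):

```csharp
using System.Configuration;
using System.Xml;

class RawXmlSectionHandler : IConfigurationSectionHandler
{
    // Returns the section's XmlNode untouched so callers can query it themselves
    public object Create(object parent, object configContext, XmlNode section)
    {
        return section;
    }
}

// Usage, assuming the section is registered under the name "Books":
// var node = (XmlNode)ConfigurationManager.GetSection("Books");
// var firstTitle = node.SelectSingleNode("Book/@title").Value;
```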
What you want to do is read up on Custom Configuration Sections.
Since IConfigurationSectionHandler is deprecated I thought it's worth mentioning that you can still implement a pure serialized section just by overriding ConfigurationSection.DeserializeSection and not calling the base implementation.
Here is a very basic example that I reuse a lot. A simple configuration section that loads an object graph from inline XAML. (Naturally you can implement with XmlSerializer instead)
using System.Configuration;
using System.Xaml;
using System.Xml;
...
public class XamlConfigurationSection<T> : ConfigurationSection
{
public static XamlConfigurationSection<T> Get(string sectionName)
{
return (XamlConfigurationSection<T>)ConfigurationManager.GetSection(sectionName);
}
public T Content { get; set; }
protected override void DeserializeSection(XmlReader xmlReader)
{
xmlReader.Read();
using (var xamlReader = new XamlXmlReader(xmlReader))
Content = (T)XamlServices.Load(xamlReader);
}
}
I use custom XML in my app.config file and create an app.XSD from it.
The XSD file contains the schema of the custom app.config section.
The XSD file can then be translated to a VB or C# class file using 'xsd.exe'.
Now all you have to do is deserialize the config file into the class.
<?xml version="1.0" encoding="utf-8" ?>
<configuration>
<configSections>
<section name="CustomConfig" type="Object" />
</configSections>
<CustomConfig>
<ActiveEnvironment>QAS</ActiveEnvironment>
<Environments>
<Environment name ="PRD" log="Y">
</Environment>
<Environment name ="QAS" log="N">
</Environment>
<Environment name ="DEV" log="Y">
</Environment>
</Environments>
</CustomConfig>
</configuration>
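A sketch of that last step, assuming xsd.exe generated a CustomConfig class from the schema (the class shape below is invented for illustration) and that you obtain the section's raw XmlNode, for example from a section handler like the one shown earlier:

```csharp
using System.Xml;
using System.Xml.Serialization;

// Shape approximating what xsd.exe would generate from the schema
[XmlRoot("CustomConfig")]
public class CustomConfig
{
    public string ActiveEnvironment { get; set; }
}

public static class CustomConfigLoader
{
    // Deserialize the section's XmlNode into the generated class
    public static CustomConfig Load(XmlNode sectionNode)
    {
        var serializer = new XmlSerializer(typeof(CustomConfig));
        return (CustomConfig)serializer.Deserialize(new XmlNodeReader(sectionNode));
    }
}
```

With the config above, Load(...).ActiveEnvironment would come back as "QAS"; unknown elements such as Environments are simply ignored by XmlSerializer unless you add matching members.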
