C#: Dynamically instantiate different classes in the same statement?

Here is a simplified version of what I'm trying to do:
Without having multiple if..else clauses and switch blocks, can I mimic the behavior of JavaScript's eval() (shudder) to instantiate a class in C#?
// Determine report orientation -- Portrait or Landscape
// There are 2 differently styled reports (beyond paper orientation)
string reportType = "Portrait";
GenericReport report;
report = new eval(reportType + "Report()"); // Resolves to PortraitReport()
The need stems from the fact that I have 6 types of Crystal Reports (that do the same thing, but look drastically different) for 50 states, with 3 styles each. Rather than entertain the notion of a giant switch block with nested if..else statements determining which of 900 reports to use, I was hoping for an eval-like solution.

You could use Activator.CreateInstance("myAssembly", "PortraitReport");. Although the more readable way would be to create a Portrait factory, which would create the correct type for you.
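For the question's example, a rough sketch might look like this (the string overload returns an ObjectHandle that must be unwrapped, and the type name must be namespace-qualified; "ReportsAssembly" and "MyApp.Reports" are placeholder names):
// Sketch only: assembly and namespace names below are placeholders.
string reportType = "Portrait"; // or "Landscape"
System.Runtime.Remoting.ObjectHandle handle = Activator.CreateInstance(
    "ReportsAssembly",                           // assembly display name
    "MyApp.Reports." + reportType + "Report");   // namespace-qualified type name
GenericReport report = (GenericReport)handle.Unwrap();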

As others have mentioned above, you can use the Activator class to create an instance of a class from its text name.
But there is one more option.
Since you mentioned an eval-like function in C#, I assumed you not only want to create an instance of the class by its text name, but also to fill it with properties from the same string.
For this purpose you need deserialization.
Deserialization converts a string representation of a class into an instance, restoring all the properties specified in the string.
One way is XML serialization, which uses an XML document to build the instance.
Here is a small example:
public class Report1
{
    public string Orientation { get; set; }
    public string ReportParameter1 { get; set; }
    public string ReportParameter2 { get; set; }
}
Above is the class that you want to instantiate and fill with parameters from a string.
Below is XML that can do that:
<?xml version="1.0"?>
<Report1>
<Orientation>Landscape</Orientation>
<ReportParameter1>Page1</ReportParameter1>
<ReportParameter2>Colorado</ReportParameter2>
</Report1>
To create an instance from that XML, use System.Xml.Serialization.XmlSerializer:
string xml = @"<?xml version=""1.0""?>
<Report1>
<Orientation>Landscape</Orientation>
<ReportParameter1>Page1</ReportParameter1>
<ReportParameter2>Colorado</ReportParameter2>
</Report1>";

// Create a stream for the serializer and put our xml into it
MemoryStream str = new MemoryStream(Encoding.ASCII.GetBytes(xml));

// Get the type we are expecting by passing the proper namespace and class name
Type expectingType = Assembly.GetExecutingAssembly().GetType("ConsoleApplication1.Report1");
XmlSerializer ser = new XmlSerializer(expectingType);

// Deserialize the xml into an object
object obj = ser.Deserialize(str);

// Now we have our report instance initialized
Report1 report = obj as Report1;
In this way you can build the appropriate XML by string concatenation; that XML will contain all the parameters for your report.
Then you can convert it into the proper type.

Look at the Activator.CreateInstance method.
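A minimal sketch of that idea, using Type.GetType and following the naming from the question (it assumes the report classes share a GenericReport base and live in an assumed MyApp.Reports namespace in the current assembly):
// Sketch: resolve the type by its full name, then instantiate it.
Type type = Type.GetType("MyApp.Reports." + reportType + "Report", throwOnError: true);
GenericReport report = (GenericReport)Activator.CreateInstance(type);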

All the classes will need to adhere to an interface. Then make a generic method, which will be your eval, constrained on that interface. Here is an example (call the static Usage method to see it in action):
public interface IOperation
{
    string OutputDirection { get; set; }
}

public class MyOperation : IOperation
{
    public string OutputDirection { get; set; }
}

public static class EvalExample
{
    public static T Eval<T>(string direction) where T : IOperation
    {
        T target = (T)Activator.CreateInstance(typeof(T));
        target.OutputDirection = direction;
        return target;
    }

    // Example only
    public static void Usage()
    {
        MyOperation mv = Eval<MyOperation>("Horizontal");
        Console.WriteLine(mv.OutputDirection); // Horizontal
    }
}

Using the factory pattern, and reflection (as explained in this blog post), you would get:
static void Main(string[] args)
{
    ReportFactory<Report> factory = new ReportFactory<Report>();
    Report r1 = factory.CreateObject("LandscapeReport");
    Report r2 = factory.CreateObject("PortraitReport");
    Console.WriteLine(r1.WhoAmI());
    Console.WriteLine(r2.WhoAmI());
}
Which would output "Landscape" and "Portrait", respectively.
Of course, for the plumbing, you need an interface that all your reports are based off of (which I assume you already have).
For this example:
public interface Report
{
string WhoAmI();
}
And the two implementations:
public class PortraitReport : Report
{
public string WhoAmI()
{
return "Portrait";
}
}
public class LandscapeReport : Report
{
public string WhoAmI()
{
return "Landscape";
}
}
The secret is in the ReportFactory, which uses reflection to see what other classes are based on Report and automatically registers them for use, which I think is pretty cool:
public class ReportFactory<Report>
{
    private Dictionary<string, Type> reportMap = new Dictionary<string, Type>();

    public ReportFactory()
    {
        Type[] reportTypes = Assembly.GetAssembly(typeof(Report)).GetTypes();
        foreach (Type reportType in reportTypes)
        {
            if (!typeof(Report).IsAssignableFrom(reportType) || reportType == typeof(Report))
            {
                // reportType is not derived from Report
                continue;
            }
            reportMap.Add(reportType.Name, reportType);
        }
    }

    public Report CreateObject(string ReportName, params object[] args)
    {
        return (Report)Activator.CreateInstance(reportMap[ReportName], args);
    }
}
So now all you have to do is just add any new implementations of Report in your assembly, and they will be available to the factory with no extra coding or changing other code files.


Amazon Dynamo DB Persistent ORM

I am trying to create a generic wrapper library (C#/.NET) for AWS DynamoDB which can act as a DAL (Data Access Layer). The applications consuming this library will not be tightly coupled with AWS libraries, as there is a possibility that it can be changed later.
The methods to be exposed from the wrapper class are:
InsertItem<T>(object), UpdateItem<T>(object), DeleteItem<T>(id/object),
List<T> GetAll(), T GetByParameter<T>(Id).
I see that there are three approaches to consuming AWS DynamoDB services using the AWSSDK.
Approach (1): Low-level access - convert the model to the AWS hashmap input structure and invoke GetItem()/PutItem().
Approach (2): High-level access using the Document model - convert the model to the AWS document model and pass the document object to AWS.
Approach (3): High-level access using Persistence - use the DynamoDBTable attribute on the model to map it to a DynamoDB table and use LINQ operations to get/update the table.
In approaches (1) and (2), I find it difficult to map the model to the DynamoDB table. In approach (3), I see that I need to include the DynamoDB attributes in the model class in the application, which would make it tightly coupled.
Is there any way to create the mapping at runtime in these cases, or is there any other approach?
I also thought about whether I could JSON serialize/deserialize the model and insert it into DynamoDB (in that case there would be only 2 columns - id and the JSON body - for any model).
Please correct me if I am wrong or missing something.
What follows is the solution I'm moving forward with, minus some extraneous details. The biggest challenge is reading data of arbitrary types, since you can't easily tell just from the JSON.
In my solution, the consuming application that chooses the types to write also knows how to identify which type to deserialize to when reading. This is necessary because I need to return multiple logs of different types, so it makes more sense to return the JSON to the consumer and let them deal with it. If you want to contain this in the DAL, you could add a Type column in the DB and convert it using Reflection, though I think you'd still have issues returning data for multiple types in one call.
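If you did go the Type-column route, the reflective conversion could be as small as this sketch (it assumes the column stores an assembly-qualified type name and that the payload round-trips through JSON.NET):
// Sketch: turn a stored type name + JSON payload back into a typed object.
public static object DeserializeLog(string typeName, string dataJson)
{
    Type targetType = Type.GetType(typeName, throwOnError: true);
    return JsonConvert.DeserializeObject(dataJson, targetType);
}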
The interface we present for consumption has read/write methods (you can add whatever else you need). Note that writing allows specification of T, but reading requires the caller to deserialize, as mentioned above.
public interface IDataAccess
{
Task WriteAsync<T>(Log<T> log) where T : class, new();
Task<IEnumerable<LogDb>> GetLogsAsync(long id);
}
A LogDb class contains the Persistence attributes:
[DynamoDBTable("TableName")]
public class LogDb
{
[DynamoDBHashKey("Id")]
public long Id{ get; set; }
[DynamoDBProperty(AttributeName = "Data", Converter = typeof(JsonStringConverter))]
public string DataJson { get; set; }
}
A generic Log<T> class used for strongly-typed writing. The ToDb() method is called in the IDataAccess implementation to actually write to the DB. The constructor taking in a LogDb would be used by a consuming application that had identified the appropriate type for deserialization:
public class Log<T>
    where T : class, new()
{
    public Log() { }

    public Log(LogDb logDb)
    {
        Data = logDb.DataJson.Deserialize<T>();
        Id = logDb.Id;
    }

    public long Id { get; set; }

    public T Data { get; set; }

    public LogDb ToDb()
    {
        string dataJson = Data.Serialize();
        return new LogDb
        {
            DataJson = dataJson,
            Id = Id
        };
    }
}
The JsonStringConverter used in the attributes on LogDb converts the JSON string in the DataJson property to and from a DynamoDB Document:
public class JsonStringConverter : IPropertyConverter
{
public DynamoDBEntry ToEntry(object value)
{
string json = value as string;
return !String.IsNullOrEmpty(json)
? Document.FromJson(json)
: null;
}
public object FromEntry(DynamoDBEntry entry)
{
var document = entry.AsDocument();
return document.ToJson();
}
}
A helper class provides the Serialize and Deserialize extensions, which use JSON.NET's JsonConvert.Serialize/Deserialize, but with null checks:
public static class JsonHelper
{
public static string Serialize(this object value)
{
return value != null
? JsonConvert.SerializeObject(value)
: String.Empty;
}
public static T Deserialize<T>(this string json)
where T : class, new()
{
return !String.IsNullOrWhiteSpace(json)
? JsonConvert.DeserializeObject<T>(json)
: null;
}
}
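For completeness, an IDataAccess implementation built on the SDK's object persistence model might look roughly like this. It is only a sketch under the assumption that DynamoDBContext (from Amazon.DynamoDBv2.DataModel) is acceptable inside the DAL and that LogDb is mapped as shown above; verify the calls against your SDK version:
public class DynamoDataAccess : IDataAccess
{
    private readonly DynamoDBContext _context;

    public DynamoDataAccess(IAmazonDynamoDB client)
    {
        _context = new DynamoDBContext(client);
    }

    public Task WriteAsync<T>(Log<T> log) where T : class, new()
    {
        // ToDb() produces the attribute-mapped LogDb shown earlier.
        return _context.SaveAsync(log.ToDb());
    }

    public async Task<IEnumerable<LogDb>> GetLogsAsync(long id)
    {
        // Query by the hash key declared on LogDb.
        return await _context.QueryAsync<LogDb>(id).GetRemainingAsync();
    }
}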
A generic extension method that converts a list of DynamoDB Documents into a list of C# objects:
public static List<T> ToMap<T>(this List<Document> item)
{
    List<T> model = (List<T>)Activator.CreateInstance(typeof(List<T>));
    foreach (Document doc in item)
    {
        T m = (T)Activator.CreateInstance(typeof(T));
        var propTypes = m.GetType();
        foreach (var attribute in doc.GetAttributeNames())
        {
            var property = doc[attribute];
            if (property is Primitive)
            {
                var properties = propTypes.GetProperty(attribute);
                if (properties != null)
                {
                    var value = (Primitive)property;
                    if (value.Type == DynamoDBEntryType.String)
                    {
                        properties.SetValue(m, Convert.ToString(value.AsPrimitive().Value));
                    }
                    else if (value.Type == DynamoDBEntryType.Numeric)
                    {
                        properties.SetValue(m, Convert.ToInt32(value.AsPrimitive().Value));
                    }
                }
            }
            else if (property is DynamoDBBool)
            {
                var booleanProperty = propTypes.GetProperty(attribute);
                if (booleanProperty != null)
                    booleanProperty.SetValue(m, property.AsBoolean());
            }
        }
        model.Add(m);
    }
    return model;
}
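Usage would be along these lines (a hypothetical example; table is assumed to be an Amazon.DynamoDBv2.DocumentModel.Table and MyModel a POCO whose property names match the table's attribute names):
// Hypothetical usage: fetch documents with the document model, then map them.
List<Document> documents = await table.Scan(new ScanFilter()).GetRemainingAsync();
List<MyModel> models = documents.ToMap<MyModel>();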

Protobuf with many unknown types

What is the best approach when I do not know at runtime which types are de/serialized with protobuf?
Currently I am playing with the idea of extending the RuntimeTypeModel in the type initializers of the types which are candidates for serialization, which seems to work pretty well for serialization. But when deserializing in a different process I would need to load the same type model that was used to serialize the types from somewhere. Is it possible to serialize the RuntimeTypeModel to disk to reuse it later when the serialized data is read again from disk? Ideally I would put the model into the serialized stream as well to have a fully self-describing object model. Or would I need to record the steps and put this data in front of my serialized stream?
One could create a header which contains the offset to the real data, the runtime type model, and its length, which would be pretty nice. Or is there a better approach for dealing with a plug-in architecture where at serialization time I have all types registered, but during deserialization I might still need to load some types from their respective assemblies because the code has not yet been touched?
using ProtoBuf;
using ProtoBuf.Meta;
using System.Collections.Generic;
using System.IO;
namespace protobuf
{
[ProtoContract]
public interface IAbstraction
{
[ProtoMember(1)]
string Name { get; set; }
}
[ProtoContract]
public class Base : IAbstraction
{
static Base()
{
ProtobufTypeModels.MainModel.Add(typeof(IAbstraction), true).AddSubType(101, typeof(Base));
}
[ProtoMember(1)]
public string Name { get; set; }
[ProtoMember(2, AsReference =true)]
public List<IAbstraction> Instances = new List<IAbstraction>();
}
[ProtoContract]
public class Next : Base
{
static Next()
{
ProtobufTypeModels.MainModel.Add(typeof(IAbstraction), true).AddSubType(100, typeof(Next));
}
[ProtoMember(1)]
public string NextName { get; set; }
}
public static class ProtobufTypeModels
{
public static readonly RuntimeTypeModel MainModel = TypeModel.Create();
}
class Program
{
static void Main(string[] args)
{
Base b = new Base { Name = "Alois" };
b.Instances.Add(new Next { Name = "Base", NextName = "Christian" });
b.Instances.Add(new Base { Name = "SecondBase", Instances = b.Instances });
var mem = new MemoryStream();
ProtobufTypeModels.MainModel.Serialize(mem, b);
mem.Position = 0;
var deser = (Base) ProtobufTypeModels.MainModel.Deserialize(mem, null, typeof(Base));
}
}
}
Random thought: you could use .Compile(serializerName,dllPath) after you've finished the initialization and write the baked serializer to disk; then you can reference it, use new SerializerName() to create the instance, and use the .Serialize etc methods from there. The dll will never change. This also means it never has to process any metadata ever again; no reflection, no IL emit, etc.
Other than that: we could possibly do something more gentle in terms of storing the configuration, but: protobuf-net doesn't currently add anything directly to support it, and it would probably be more relevant for you to have your own bespoke configuration data that you simply consume at startup.
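A rough sketch of that Compile idea (the serializer name and output path are placeholders; RuntimeTypeModel.Compile(name, path) is the protobuf-net v2 API):
// Sketch: build the model up front, then bake it into a dll.
RuntimeTypeModel model = TypeModel.Create();
model.Add(typeof(IAbstraction), true).AddSubType(101, typeof(Base));
// ... register the remaining types exactly as in the question ...
model.Compile("MySerializers", "MySerializers.dll");

// In the consuming process, reference MySerializers.dll and use:
//   var serializer = new MySerializers();
//   serializer.Serialize(stream, obj);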

Is this method returning a System.Object class an anti-pattern?

I work on an automation team designing tests for electronic components. One thing our framework sorely needs is a single source for our driver objects for the various pieces of test equipment at a workbench (right now, driver object creation is very wild-west).
Basically, the idea is that there would be one object, constructed from a configuration file (or files), which is the single place all other test code looks to for driver objects, based on a name string. I'll call it a "DriverSource" here.
The problem is, these drivers do not present similar interfaces at all. One might be a power supply (with methods like "SetVoltage" and "SetCurrentLimit"), while another might be a digital multimeter (with methods like "ReadVoltage" or "ReadCurrent").
The best solution I've come up with is to have a method with the following declaration:
public object GetDriver(string name);
Then, the test code using my "DriverSource" object would call that method, and then cast the System.Object to the correct driver type (or more accurately, the correct driver interface, like IPowerSupply).
I think casting like that is acceptable because whatever test code is about to use this driver had better know what the interface is. But I was hoping to get some input on whether or not this is an anti-pattern waiting to bite me. Any better pattern for solving this issue would also be greatly appreciated.
A final note: I think this is obvious, but performance is essentially a non-issue for this problem. Fetching the drivers is something that will happen fewer than 100 times in a test run that can last hours.
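In other words, the consuming test code would look something like this (driverSource and the driver names are illustrative, not part of an existing API):
// Hypothetical usage of the proposed DriverSource: the caller casts to the
// interface it already expects.
IPowerSupply supply = (IPowerSupply)driverSource.GetDriver("BenchSupply1");
IMultimeter meter = (IMultimeter)driverSource.GetDriver("BenchMeter1");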
If you already know the type and you're going to cast to an interface or class anyway, a better approach would be to hand the method call a type parameter.
public T GetDriver<T>(string name);
You can then use a Factory pattern to return you an object of the appropriate type from the method.
public T GetDriver<T>(string name)
{
    switch (typeof(T).Name)
    {
        case "Foo":
            // Construct and return a Foo object
        case "Bar":
            // Construct and return a Bar object
        case "Baz":
            // Construct and return a Baz object
        default:
            return default(T);
    }
}
Usage:
var driver = GetDriver<Foo>(someString); // Returns a Foo object
If you really want to make this generic, I would use a factory pattern.
Let's start off by identifying the type structure:
public interface IDriver
{
}
public interface IPowerSupply : IDriver
{
void SetVoltage();
void SetCurrent();
}
public interface IMultimeter : IDriver
{
double MeasureVoltage();
}
Which you can add to or remove from as needed. Now we need a way for the factory to auto-discover the correct types and provide the configuration information to it. So let's create a custom attribute:
public class DriverHandlerAttribute : Attribute
{
public Type DriverType { get; set; }
public string ConfigurationName { get; set; }
}
And then we need a place to store configuration data. This type can contain whatever you want, like a dictionary of keys/values that are loaded from configuration files:
public class Configuration
{
public string DriverName { get; set; }
public string OtherSetting { get; set; }
}
Finally we can create a driver. Lets create an IPowerSupply:
[DriverHandler(DriverType = typeof(IPowerSupply), ConfigurationName="BaseSupply")]
public class BasePowerSupply : IPowerSupply
{
public BasePowerSupply(Configuration config) { /* ... */ }
public void SetVoltage() { /* ... */ }
public void SetCurrent() { /* ... */ }
}
The important part is that it is decorated with the attribute and that it has a constructor (although I created the factory so that it can use default constructors too):
public static class DriverFactory
{
public static IDriver Create(Configuration config)
{
Type driverType = GetTypeForDriver(config.DriverName);
if (driverType == null) return null;
if (driverType.GetConstructor(new[] { typeof(Configuration) }) != null)
return Activator.CreateInstance(driverType, config) as IDriver;
else
return Activator.CreateInstance(driverType) as IDriver;
}
public static T Create<T>(Configuration config) where T : IDriver
{
return (T)Create(config);
}
private static Type GetTypeForDriver(string driverName)
{
var type = (from t in Assembly.GetExecutingAssembly().GetTypes()
let attrib = t.GetCustomAttribute<DriverHandlerAttribute>()
where attrib != null && attrib.ConfigurationName == driverName
select t).FirstOrDefault();
return type;
}
}
So to use this, you would read in the configuration data (loaded from XML, read from a service, files, etc). You can then create the driver like:
var driver = DriverFactory.Create(configuration);
Or if you are using the generic method and you know the configuration is for a power supply, you can call:
var driver = DriverFactory.Create<IPowerSupply>(configuration);
And when you run your tests, you can verify that you get the right data back, for example, in your test method:
Assert.IsTrue(driver is IPowerSupply);
Assert.IsTrue(driver is BasePowerSupply);
Assert.DoesWhatever(((IPowerSupply)driver).SetVoltage());
And so-on and so-forth.
I would go with this code:
public T GetDriver<T>(string name)
{
return ((Func<string, T>)_factories[typeof(T)])(name);
}
The _factories object looks like this:
private Dictionary<Type, Delegate> _factories = new Dictionary<Type, Delegate>()
{
{ typeof(Foo), (Delegate)(Func<string, Foo>)(s => new Foo(s)) },
{ typeof(Bar), (Delegate)(Func<string, Bar>)(s => new Bar()) },
{ typeof(Baz), (Delegate)(Func<string, Baz>)(s => new Baz()) },
};
Basically the _factories dictionary contains all of the code to create each object type based on string parameter passed in. Note that in my example above the Foo class takes s as a constructor parameter.
The dictionary can also then be modified at run-time to suit your needs without needing to recompile code.
I would even go one step further. If you define this factory class:
public class Factory
{
private Dictionary<Type, Delegate> _factories = new Dictionary<Type, Delegate>();
public T Build<T>(string name)
{
return ((Func<string, T>)_factories[typeof(T)])(name);
}
public void Define<T>(Func<string, T> create)
{
_factories.Add(typeof(T), create);
}
}
You can then write this code:
var drivers = new Factory();
drivers.Define(s => new Foo(s));
drivers.Define(s => new Bar());
drivers.Define(s => new Baz());
var driver = drivers.Build<Foo>("foo");
I like that even better. It's strongly-typed and easily customized at run-time.

A Template for Static Methods in C#

I am writing an application that allows a user to run a test. A test consists of a number of different objects, such as configuration, temperature, and benchmark. Settings and the like are saved to and loaded from XML. I pass different XElements around in my code so I can build the final XML document differently for different situations. I wish to do something like this:
public abstract class BaseClass<T>
{
abstract static XElement Save(List<T>);
abstract static List<T> Load(XElement structure);
}
public class Configuration : BaseClass<Configuration>
{
public string Property1 { get; set; }
public string Property2 { get; set; }
//etc...
public static XElement Save(List<Configuration>)
{
XElement xRoot = new XElement("Root");
//etc...
return xRoot;
}
public static List<Configuration> Load(XElement structure)
{
List<BaseClass> list = new List<BaseClass>();
//etc...
return list;
}
}
public class Temperature : BaseClass<Temperature>
{
public float Value { get; set; }
public static XElement Save(List<Temperature>)
{
//save
}
public static List<Temperature> Load(XElement structure)
{
//load
}
}
[EDIT]: Revising question (Changed signatures of above functions)[/EDIT]
Of course, I am not actually allowed to override the static methods of BaseClass. What is the best way to approach this? I would like as much of the following to be valid as possible:
List<Temperature> mTemps = Temperature.Load(element);
List<Configuration> mConfigs = Configuration.Load(element);
Temperature.Save(mTemps);
Configuration.Save(mConfigs);
[EDIT]Changed intended usage code above[/EDIT]
The only solution I can think of is the following, which is NOT acceptable:
public class File
{
public static XElement Save(List<Temperature> temps)
{
//save temp.Value
}
public static XElement Save(List<Configuration> configs)
{
//save config.Property1
//save config.Property2
}
//etc...
}
Static methods aren't part of a class instance, so overriding them doesn't make sense anyway; they can't access any non-static members of the class they're declared on.
This is kind of a strategy pattern scenario, e.g. you could just have single static Load & Save methods that check the type of object passed to them, and act accordingly. But here's another slightly more clever way that uses generic types to create a prototype and call its method, allowing you to keep the logic within each derived object type.
(edit again)
Here's another crack at it, along the same lines as my original suggestion. I actually tested this and it works, so I think this is the best you can do to get all the functionality you are looking for (other than testing types and calling code conditionally). You still need to pass a type for Load, otherwise, the runtime would have no idea what kind of return is expected. But Save works universally. And the subclass implementations are strongly typed.
This just uses the first object in the list as its prototype, simple enough.
public interface IBaseObject
{
XmlElement Save(IEnumerable<IBaseObject> list);
IEnumerable<IBaseObject> Load(XmlElement element);
}
public interface IBaseObject<T> where T: IBaseObject
{
XmlElement Save(IEnumerable<T> list);
IEnumerable<T> Load(XmlElement element);
}
public class Temperature : IBaseObject<Temperature>, IBaseObject
{
public XmlElement Save(IEnumerable<Temperature> list)
{
throw new NotImplementedException("Save in Temperature was called");
}
public IEnumerable<Temperature> Load(XmlElement element)
{
throw new NotImplementedException("Load in Temperature was called");
}
// must implement the nongeneric interface explicitly as well
XmlElement IBaseObject.Save(IEnumerable<IBaseObject> list)
{
return Save((IEnumerable<Temperature>)list);
}
IEnumerable<IBaseObject> IBaseObject.Load(XmlElement element)
{
return Load(element);
}
}
// or whatever class you want your static methods living in
public class BaseObjectFile
{
public static XmlElement Save(IEnumerable<IBaseObject> list)
{
IBaseObject obj = list.DefaultIfEmpty(null).First(); // linq
return obj==null ? null : obj.Save(list);
}
public static IEnumerable<IBaseObject> Load<T>(XmlElement element)
where T: IBaseObject, new()
{
IBaseObject proto = new T();
return proto.Load(element);
}
}
(original edit)
This has a problem in that you must call the static methods with a type, e.g.
BaseClass<Temperature>.Load()
There is a way around this for the Save method, but part of what you want is not possible. The Load method cannot know what type of list to return because its only parameter has no information about the return type. Hence, it can't possibly decide which type to create as a prototype. So no matter what, if you wanted to use a common Load method, you would have to pass it a type as in the syntax above.
For the Save method, you could use reflection to create the prototype in the static method, by obtaining the type from the first element, and then call the Save method from the prototype. So if you only need the Save method to be used as you like, that much is possible.
Ultimately, though, I think it would be a lot simpler to do something like this:
public static XElement Save(List<IBaseClass> list)
{
    // Check the element type, since the list itself is never a Temperature
    if (list.FirstOrDefault() is Temperature)
    {
        // do temperature code
    }
    else if (list.FirstOrDefault() is SomethingElse)
    {
        // do something else
    }
}
Anyway - like I said it's going to require reflection to make even the Save method work in this way. I'd just use the simple approach.
(original bad code removed)
If you don't really care about the format in which it's saved, you're free to use serialisation (which uses reflection internally).
string SerialiseToString<T>(T source)
{
    using (StringWriter sw = new StringWriter())
    {
        XmlSerializer xml = new XmlSerializer(typeof(T));
        xml.Serialize(sw, source);
        return sw.ToString();
    }
}
If you want to incorporate it into a larger part of your XML file, the easiest way would be to parse this output and add it to yours. Alternatively, you could reflect the properties yourself.
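Reflecting the properties yourself could be as small as this sketch (it only handles plain readable properties and uses the type name as the element name; adapt as needed):
// Sketch: build an XElement from an object's public properties via reflection.
static XElement ToXElement(object source)
{
    var root = new XElement(source.GetType().Name);
    foreach (PropertyInfo p in source.GetType().GetProperties())
    {
        if (p.CanRead)
            root.Add(new XElement(p.Name, p.GetValue(source, null)));
    }
    return root;
}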
If the shared part is the same, you can put it in BaseClass:
public static XElement Save(IEnumerable<BaseClass> list)
{
var root = new XElement("root");
foreach (var item in list)
{
item.Save(root);
}
return root;
}
Here, Save(XElement) is a virtual method that each type implements.
Obviously, you can't do this with loading; you either have to know what type you are loading, or have some way of finding out.
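For illustration, the virtual Save(XElement) could look like this (a sketch only; the non-generic BaseClass, the element names, and the Temperature shape beyond the question's Value property are assumptions):
public abstract class BaseClass
{
    // Each derived type appends its own element(s) under the shared root.
    public abstract void Save(XElement root);
}

public class Temperature : BaseClass
{
    public float Value { get; set; }

    public override void Save(XElement root)
    {
        root.Add(new XElement("Temperature", new XAttribute("value", Value)));
    }
}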

C# : manually reading XML config for derived classes

Suppose I have a class CarResource,
a class RaceCarResource : CarResource,
and a class SuperDuperUltraRaceCarResource : RaceCarResource.
I want to be able to load their data using a single method LoadFromXML.
How would I go about getting CarResource.LoadFromXML to load its data,
RaceCarResource to call CarResource.LoadFromXML and then load its own additional data, etc.?
If I use XmlTextReader I only know how to parse the entire file in one go,
not how to use it so that first CarResource.LoadFromXML can do its thing, then RaceCarResource, etc.
I hope it's at least a little bit clear what I mean :)
public class CarResource
{
public virtual void LoadFromXML(String xmlData)
{
...
}
}
public class RaceCarResource : CarResource
{
public override void LoadFromXML(String xmlData)
{
base.LoadFromXML(xmlData);
...
}
}
...and so on. With virtual/override like this, each derived class extends the base implementation while still being able to call it via base.LoadFromXML. (Using the new keyword instead would merely hide the inherited method, though it would still be callable from the child class.)
As for actually parsing the XML, you have a couple of options. My first suggestion would be to read the entire XML file into memory and then use LINQ to XML to parse through it and populate your classes. You could also try the XmlSerializer (LINQ to XML is easier to implement, but as the size of your code-base grows, XML serialization can make maintenance easier).
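A minimal LINQ to XML sketch of that idea (the element and attribute names, and a settable Colour property on CarResource, are assumptions for the example):
// Sketch: load the whole document once, then query out the parts each class needs.
XDocument doc = XDocument.Load("cars.xml");
List<CarResource> cars = doc.Descendants("car")
    .Select(c => new CarResource { Colour = (string)c.Attribute("colour") })
    .ToList();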
You could also use XML serialization, depending on the structure of the XML file you load from. It's possible to override the load method (and then override it again in subsequent classes) to load specific information - or just use attributes. See: http://msdn.microsoft.com/en-us/library/ms950721.aspx
You have a couple of options.
You can use LINQ to XML to query the child entities and pass those nodes to your other classes. This is probably the most efficient way of doing it.
You could use an XPathNavigator, again only passing the appropriate child nodes...
see: Implementing my own XPathNavigator in C#
You could simply use XML serialization (XmlSerializer), see C# - How to xml deserialize object itself?
With XML deserialization, an instance method can't repopulate the current object (deserialization always creates a new instance), so the current object is effectively 'immutable'; instead, I would suggest something like this:
public class CarResource
{
public CarResource LoadNewFromXML(string xml)
{
XmlSerializer ser = new XmlSerializer(this.GetType());
object o = null;
using (MemoryStream ms = new MemoryStream(Encoding.ASCII.GetBytes(xml)))
{
o = ser.Deserialize(ms);
}
return o as CarResource;
}
}
public class RaceCarResource : CarResource
{
}
public class SuperRaceCarResource : RaceCarResource
{
}
Your calling code then looks like:
RaceCarResource car = new RaceCarResource();
car = car.LoadNewFromXML("<RaceCarResource/>") as RaceCarResource;
SuperRaceCarResource sc = new SuperRaceCarResource();
sc = sc.LoadNewFromXML("<SuperRaceCarResource/>") as SuperRaceCarResource;
If your XML is not compatible with the .net XML serialisation, then the easiest way is to create a factory which detects which type of resource the XML represents, then handles that appropriately. If you want to put the parsing into your objects, then use a virtual method to parse the internals after creating the object:
class CarResource
{
public string Color { get; private set; }
internal virtual void ReadFrom(XmlReader xml)
{
this.Color = xml.GetAttribute("colour");
}
}
class RaceCarResource : CarResource
{
public string Sponsor { get; private set; }
internal override void ReadFrom(XmlReader xml)
{
base.ReadFrom(xml);
this.Sponsor = xml.GetAttribute("name-on-adverts");
}
}
class SuperDuperUltraRaceCarResource : RaceCarResource
{
public string Super { get; private set; }
internal override void ReadFrom(XmlReader xml)
{
base.ReadFrom(xml);
this.Super = xml.GetAttribute("soup");
}
}
class CarResourceFactory
{
public CarResource Read(XmlReader xml)
{
CarResource car;
switch (xml.LocalName)
{
case "ordinary-car": car = new CarResource(); break;
case "racecar": car = new RaceCarResource(); break;
case "super_duper": car = new SuperDuperUltraRaceCarResource(); break;
default: throw new XmlException();
}
XmlReader sub = xml.ReadSubtree();
car.ReadFrom(sub);
sub.Close();
return car;
}
}
This works OK if the XML for a sub-type has its child elements appended strictly after or before the content for the super-type. Otherwise you have to do more work to reuse the super-type's serialisation, breaking it up into smaller methods (e.g. the base has methods to load the number of wheels, doors, and engine size; the race car calls LoadDoorData, LoadAeroFoilData, LoadWheelData if the XML for the race car has the aerofoil data in between the door and wheel data). For formats with no logical ordering imposed (XMI, RDF) you have to inspect the local name to decide which specialised method to call, which gets a bit messy if you want to combine it with virtual methods. In that case, it's better to use a separate serialisation helper.
Other mechanisms can be used in the factory if the set of types to be created is not fixed to a few types.
