Handling different versions of classes in a factory - C#

I am generating classes from an XSD and need to populate them so they can be serialized to XML.
I have separate local classes containing all the info that goes into the generated classes.
The problem is that the generated classes come in versions, and the properties of those classes are themselves other classes of the same version.
class LocalData
{
public MyClass property { get; set; }
}
class XmlVersion1
{
public MyClassV1 property { get; set; }
}
class XmlVersion2
{
public MyClassV2 property { get; set; }
public MyClassXV2 newProperty { get; set; }
}
The data in MyClassV1 and MyClassV2 is basically the same, so the same code can be used.
I wanted to make a factory that just took the LocalData class and any of the versioned classes and populated the data in the versioned class, but I run into a problem as soon as I need to do property = new MyClassVx, because the factory does not know which version it is supposed to create.
I could do
if (parameter is XmlVersion1 v1)
    v1.property = new MyClassV1();
and so on, but that is a LOT of code.
This is for generating XML messages that are specified by an external company; they come in different versions, and we have to be able to serialize and deserialize the content into our internal system.

We have not found a solution to this specific issue and chose to use AutoMapper, which seems to solve our problem in a different way.
We made a tool that takes the generated classes and creates the mapping classes needed for AutoMapper by inspecting their assembly. If you have large generated classes you could do this as well. We can now generate the thousands of lines of code needed to map the classes. It also solved an issue we had when mapping properties of type 'object' to specific classes. I don't know if it's helpful, but there it is.
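For a small case, the hand-written equivalent of that generated mapping code looks roughly like this (a sketch only, using the classes from the question; AutoMapper matches properties by name, and localData stands in for a populated LocalData instance):

var config = new MapperConfiguration(cfg =>
{
    cfg.CreateMap<MyClass, MyClassV1>();
    cfg.CreateMap<MyClass, MyClassV2>();
    cfg.CreateMap<LocalData, XmlVersion1>();
    cfg.CreateMap<LocalData, XmlVersion2>();   // newProperty has no local counterpart and must be filled in separately
});
var mapper = config.CreateMapper();

XmlVersion2 message = mapper.Map<XmlVersion2>(localData);   // pick whichever version the consumer needs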

The idea is very simple: the factory does not care about the version, but newer clients always support the older versions' features. If version 0.5 has a method to receive an orders list (say this is a shopping app), version 0.6 should also have the same method. We have the same structure, and this is the way we do it.


Using interfaces in models with SQLite

Let's say I have an interface like this:
public interface IUser
{
int Id { get; }
string Name { get; }
List<IMonthlyBudget> MonthlyBudget { get; }
}
and then I have a model that implements this:
public class User : IUser
{
public int Id { get; set; }
public string Name { get; set; }
public List<IMonthlyBudget> MonthlyBudget { get; set; }
}
and here I have the IMonthlyBudget:
public interface IMonthlyBudget
{
int Id { get; }
float MonthlyMax { get; }
float CurrentSpending { get; }
float MonthlyIncome { get; }
}
Now I have my models. But the issue comes with using SQLite. SQLite can't know what the real implementation of IMonthlyBudget is. I understand why, but I really don't want to remove the interface and expose the real implementation to all the clients that use these models. In my project structure I have a Core project that has all the model interfaces, and the model implementations are in a data access project.
Is there something wrong with how I'm approaching this problem? I assume I'm not the first one to run into an issue like this. Isn't it completely normal practice to keep model interfaces (which repositories etc. then use as their return types, parameters and so on) and implement the actual concrete models in a data access project?
And can someone explain why I can't do this:
public class User : IUser
{
public int Id { get; set; }
public string Name { get; set; }
public List<MonthlyBudget> MonthlyBudget { get; set; }
}
MonthlyBudget implements IMonthlyBudget, so shouldn't it be completely fine to use the concrete model as the type instead of the interface when the concrete model actually implements the interface?
A few questions here, so I'll break it down into sections:
Use of Interfaces
It is definitely good practice to interface classes that perform operations. For example, you may have a data service (i.e. data access layer) interface that allows you to do operations to read and modify data in your persistent store. However, you may have several implementations of that data service. One implementation may save to the file system, another to a DBMS, another is a mock for unit testing, etc.
However, in many cases you do not need to interface your model classes. If you're using an anemic business object approach (as opposed to rich business objects), then model classes in general should just be containers for data, or Plain Old CLR Objects (POCO). Meaning these objects don't have any real functionality to speak of and they don't reference any special libraries or classes. The only "functionality" I would put in a POCO is one that is dependent only upon itself. For example, if you have a User object that has a FirstName and LastName property, you could create a read-only property called FullName that returns a concatenation of the two.
POCOs are agnostic as to how they are populated and therefore can be utilized in any implementation of your data service.
This should be your default direction when using an anemic business object approach, but there is at least one exception I can think of where you may want to interface your models. You may want to support, for example, both a SQLite data service and a Realm (NoSQL) data service. Realm objects happen to require your models to derive from RealmObject. So, if you wanted to switch your data access layer between SQLite and Realm, you would have to interface your models as you are doing. I'm just using Realm as an example, but this also holds true if you want to use your models across other platforms - like an observable base class in a UWP app, for example.
The key litmus test to determining whether you should create interfaces for your models is to ask yourself this question:
"Will I need to consume these models in various consumers and will those consumers require me to define a specific base class for my models to work properly in those consumers?"
If the answer to this is "yes", then you should make interfaces for your models. If the answer is "no", then creating model interfaces is extraneous work and you can forego it and let your data service implementations deal with the specifics of their underlying data stores.
SQLite Issue
Whether you continue to use model interfaces or not, you should still have a data access implementation for SQLite which knows that it's dealing with SQLite-specific models, and then you can do all your CRUD operations directly on those specific implementations of your model. Since you're referring to a specific model implementation, SQLite should work as usual.
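A minimal sketch of that idea, assuming the sqlite-net style of API (SQLiteConnection, CreateTable, Insert, Table); UserRow and SqliteUserRepository are hypothetical names, and callers only ever see IUser:

public class UserRow : IUser
{
    [PrimaryKey, AutoIncrement]
    public int Id { get; set; }

    public string Name { get; set; }

    [Ignore]   // collections are stored in their own table, not as a column
    public List<IMonthlyBudget> MonthlyBudget { get; set; }
}

public class SqliteUserRepository
{
    private readonly SQLiteConnection _db;

    public SqliteUserRepository(string databasePath)
    {
        _db = new SQLiteConnection(databasePath);
        _db.CreateTable<UserRow>();
    }

    // writes work against the concrete, SQLite-specific model
    public void Add(UserRow user) => _db.Insert(user);

    // reads can still be exposed through the interface
    public IUser Get(int id) =>
        _db.Table<UserRow>().Where(u => u.Id == id).FirstOrDefault();
}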
Type Compatibility
To answer your final question, the type system does not see this...
List<IMonthlyBudget> MonthlyBudget
as being type-compatible with this...
List<MonthlyBudget> MonthlyBudget
In our minds it seems like if I have a list of apples, then it should be type-compatible with a list of fruit. The compiler sees an apple as a type of fruit, but not a list of apples as a type of a list of fruit. So you can't cast between them like this...
List<IMonthlyBudget> myMonthlyBudget = (List<IMonthlyBudget>) new List<MonthlyBudget>();
but you CAN add a MonthlyBudget object to a list of IMonthlyBudget objects like this...
List<IMonthlyBudget> myMonthlyBudget = new List<IMonthlyBudget>();
myMonthlyBudget.Add(new MonthlyBudget());
Also you can use the LINQ .Cast() method if you want to cast an entire list at once.
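A one-line sketch of that (requires using System.Linq):
List<IMonthlyBudget> myMonthlyBudget = new List<MonthlyBudget>().Cast<IMonthlyBudget>().ToList();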
The reason behind this has to do with type variance. There's a good article on it here that can shed some light as to why:
Covariance and Contravariance
I hope that helps! :-)

Define contract to interact with a class (alternative to static interface)

I've seen this asked, but the standard answer is
An interface is a way to define a contract to interact with an object.
This is all well and good, but I'm in need of a way for a class to describe itself to allow its creation. Specifically, I have an interface ITicket which defines an object responsible for selling/buying assets. Different implementations require different parameters. My reflex would have been to do something that looks like:
public interface ITicket
{
    static List<TicketOption> GetAvailableOptions();
}

public class TicketOption
{
    public string Label { get; set; }
    public string Type { get; set; }
    public string Default { get; set; }
}
Then I could have selected an implementation of ITicket in my GUI and looped over the parameters to build a UI with IntegerUpDown controls for integers, DecimalUpDown controls for decimals and dropdown boxes for enums.
Alas, C# won't let me. So here I am, looking for an equivalent. Surely there must be a pattern to let me define a contract to interact with a class without an instance?
Edit: Getting into more details...
My C# application loads IronPython scripts. It scans the /Scripts folder and assumes every Python file in there contains a class called Ticket implementing ITicket.
I would like to get a list of the available parameters for every script so I can build a UI. This way developers can create Python scripts that they drop into a folder to add new, complex behavior without re-compiling the application.
Everything works well, except for automatically (and cleanly) discovering what parameters each script needs.
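One pattern that fits this shape (a sketch only, not taken from the thread; the names below are hypothetical) is to move the "describe yourself" contract onto a companion descriptor type, so the host can ask for the available options without ever constructing a ticket:

public interface ITicketDescriptor
{
    List<TicketOption> GetAvailableOptions();              // drives the generated UI
    ITicket Create(IDictionary<string, object> options);   // builds the ticket once values are chosen
}

Each script then exposes both a Ticket class and a Descriptor class, and the host scans for the descriptor instead of the static interface member that C# does not allow.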

Working with different objects that inherit interface

I've been working on learning how to use interfaces correctly in C#, and I think I mostly understand how they should be used, but I still feel confused about certain things.
I want to create a program that will create a CSV from Sales Orders or Invoices. Since they are both very similar, I figured I could create an IDocument interface that could be used to make a CSV document.
class Invoice : IDocument
{
public Address billingAddress { get; set; }
public Address shippingAddress { get; set; }
public Customer customer { get; set; }
public List<DocumentLine> lines { get; set; }
// other class specific info for invoice goes here
}
I can create a method CreateCSV(IDocument), but how would I deal with the few fields that differ between Sales Orders and Invoices? Is this a bad use of interfaces?
You don't inherit interfaces, you implement them; and in this case the interface is an abstraction; it says "all things that implement this interface have the following common characteristics (properties, methods, etc)"
In your case, you have found that in fact Invoices and Sales Orders don't quite share the exact same characteristics.
Therefore from the point of view of representing them in CSV format, it's not a great abstraction (although for other things, like calculating the value of the document, it's an excellent one)
There are a number of ways you can work around this, though; here are two (of many):
Delegate the work to the classes
You can declare an ICanDoCSVToo interface that returns the document in some kind of structure that represents CSV (let's say a CSVFormat class that wraps a collection of Fields and Values).
Then you can implement this on both Invoices and Sales Orders, specifically for those use cases, and when you want to turn either of them into CSV format, you pass them by the ICanDoCSVToo interface.
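A rough sketch of that shape (CSVFormat and its members are assumptions here, not defined in the original answer):

public interface ICanDoCSVToo
{
    CSVFormat ToCSV();   // each document renders itself into a field/value structure
}

public class CSVFormat
{
    public IList<KeyValuePair<string, string>> Fields { get; } =
        new List<KeyValuePair<string, string>>();
}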
However, I personally don't like that, as you don't really want your business logic mixed up with your export/formatting logic - that's a violation of the SRP. Note that you can achieve the same effect with abstract classes, but ultimately it's the same concept - you allow callers to tell the class, which knows about itself, to do the work.
Delegate the work to specialised objects via a factory
You can also create a factory class - let's say a CSVFormatterFactory - which, given an IDocument object, figures out which formatter to return. Here is a simple example:
public class CSVFormatterFactory
{
    public ICSVFormatter GetFormatter(IDocument document)
    {
        // we've added DocType to IDocument to identify the document type
        if (document.DocType == DocumentTypes.Invoice)
        {
            return new InvoiceCSVFormatter(document);
        }
        if (document.DocType == DocumentTypes.SalesOrders)
        {
            return new SalesOrderCSVFormatter(document);
        }
        // ... and so on for the other document types
        throw new NotSupportedException("No CSV formatter for " + document.DocType);
    }
}
In reality, you might make this generic and use an IoC library to worry about which concrete implementation you would return, but it's the same concept.
The individual formatters themselves can then cast the IDocument to the correct concrete type, and then do whatever is specifically required to produce a CSV representation of that specialised type.
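For example, a sketch of one concrete formatter (this assumes ICSVFormatter exposes a single ToCSV() method, which the original snippets do not define):

public class InvoiceCSVFormatter : ICSVFormatter
{
    private readonly Invoice _invoice;

    public InvoiceCSVFormatter(IDocument document)
    {
        // safe here, because the factory only routes invoices to this formatter
        _invoice = (Invoice)document;
    }

    public string ToCSV()
    {
        var sb = new StringBuilder();
        foreach (var line in _invoice.lines)
        {
            // write the invoice-specific columns for each document line
            sb.AppendLine(string.Join(",", _invoice.customer, line));
        }
        return sb.ToString();
    }
}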
There are other ways to handle this as well, but the factory option is reasonably simple and should get you up and running whilst you consider the other options.

WCF Contracts without the annotations [duplicate]

I was wondering if there is any way to define a WCF contract class without using the [DataContract] and [DataMember] annotations. The reason is that the domain model we currently have is fairly clean, so we would like to keep it this way. What's the best practice here? Create a transfer object and copy the domain model object into it (where the transfer object has the required annotations and is the contract transferred between client and server)? Or somehow not annotate the object model and specify the contract in a different way?
If you do not add any serialization attributes to your class, and use it as part of a WCF service contract method, WCF will use the default serialization rules to produce a data contract anyway. This means that the class will implicitly become a [DataContract], and every public property that has both a get and a set accessor will implicitly become a [DataMember].
The only time you need to apply the attributes is if you want to override the default behavior, e.g. hiding some attributes, applying namespaces, etc. It's generally considered good practice to do so anyway, because relying on the default behavior might get you in trouble later. (It also makes it explicit that your class is meant for use by WCF). But it's not strictly required, as long as the default behavior meets your needs.
In response to your follow-up:
As far as I know there's no completely external way to change the serialization behavior of the DataContractSerializer for a given class; every option requires at least some level of attribution on the class being serialized. As @Yair Nevet describes below, my preferred method for turning existing domain objects into data contracts is the MetadataType attribute.
Alternatively, you can bypass the whole issue by doing what you suggested in your question: don't serialize your domain objects, but create custom DTO objects and serialize them. I tend to do this whenever I'm using the Entity Framework, for example, because serializing those can be tricky. This is also a good approach to take if your domain objects have lots of behaviors built into them -- you get a clear separation of "data being passed around" vs. "objects participating in my business logic."
You often end up with lots of redundant code, but it does achieve your goal of zero changes to your existing objects.
You can use the MetadataType attribute and a metadata model class in order to separate the annotations from your model.
For example:
[MetadataType(typeof(MyModelMetadata))]
public class MyModel : MyModelBase {
... /* the current model code */
}
[DataContract]
public class MyModelMetadata {
[DataMember]
public string Name { get; set; }
}
WCF is capable of serializing your objects without the attributes. The attributes are there to allow for customization. For example, the following two classes will serialize identically with the DataContractSerializer:
public class Customer
{
public string FirstName { get; set; }
public string LastName { get; set; }
}
[DataContract]
public class Customer
{
[DataMember] public string FirstName { get; set; }
[DataMember] public string LastName { get; set; }
}
It is worth mentioning that you really should mark your classes with the attributes. They aren't as "messy" as you think, and doing so will actually save you from headaches in the future. For example:
[DataContract(Name = "Customer")]
public class Customer
{
[DataMember(Name = "FirstName")]
public string FirstName { get; set; }
[DataMember(Name = "LastName")]
public string LastName { get; set; }
}
In the previous code sample, I explicitly set the names of the class and members. This will allow me to refactor the names without breaking consumers' code. So, if someone decides that my class should be named CustomerDetail instead of Customer, I can still leave the contract name as Customer so that consumers of my service continue to work.
You could always use DTOs. Make a separate class that has everything that is needed to serialize your objects. Then project your domain model on to the DTO. You could use something like AutoMapper to make this process a little easier.
Regarding Performance
Unless you have hundreds, probably thousands, of objects, or a very large number of properties per class, the act of converting to and from DTOs probably isn't that much performance overhead.
If you are using something like EF, and you are not serializing every property, you might even be able to reduce some overhead by projecting your EF query directly onto your DTOs.
This is kind of a dramatic case, but I had (poorly designed) database models with 50+ properties per type. By changing to DTOs that only have the 10-15 properties I cared about, I was able to almost double the performance of a WCF service.
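As a rough illustration of that projection (context, Customers and CustomerDto are hypothetical names, not from the original answer):

var dtos = context.Customers
    .Select(c => new CustomerDto
    {
        FirstName = c.FirstName,
        LastName = c.LastName
        // only the 10-15 properties the service actually needs
    })
    .ToList();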

Design Patterns for Objects in REST API's?

I've built a REST API with the WCF Web API Preview, and I want to build a library with the classes that you pass to this API (just to make .NET developers' lives easier). These should be simple POCO classes without much functionality.
But on the receiver side it would make sense for me to add some functionality to these classes. I have an example below:
[WebInvoke(UriTemplate = "", Method = "POST")]
public Supertext.API.Order Create(Supertext.API.Order apiOrder)
{
And this is an example POCO class:
public class Order
{
public string Service { get; set; }
public string OrderTitle { get; set; }
public string Currency { get; set; }
}
Now, what's a good way to extend this class on the server side?
I guess using a subclass would not work.
Delegates?
Actually have two different versions of the class? One for clients and one for the server?
What do other people do?
The problem with adding extra functionality to this POCO class is that you are turning it into a domain object. The nature of this domain object will now be constrained by the fact that, essentially, this class acts as the definition of the interface into the operation. Changing details about this class will potentially break clients.
It is a far cleaner model to keep this class purely as a Data Transfer Object, whose single responsibility is bridging the wire format to objects, and to use a mapper such as AutoMapper to map the data from the DTO to a real domain object. The real domain object is fully under your control, and you can happily refactor it without threatening a cascading effect on your service consumers.
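A minimal sketch of that mapping on the receiving side (Domain.Order is a hypothetical internal type; the AutoMapper calls are the standard ones):

var config = new MapperConfiguration(cfg => cfg.CreateMap<Supertext.API.Order, Domain.Order>());
var mapper = config.CreateMapper();

// inside the Create action:
Domain.Order order = mapper.Map<Domain.Order>(apiOrder);   // the DTO stays a dumb contract; the domain object can evolve freely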
