I've built a REST API with the WCF Web API Preview and I want to build a library with the classes that you pass to this API (just to make .NET developers' lives easier). These should be simple POCO classes without much functionality.
But on the receiver side it would make sense for me to add some functionality to these classes. I have an example below:
[WebInvoke(UriTemplate = "", Method = "POST")]
public Supertext.API.Order Create(Supertext.API.Order apiOrder)
{
And this is an example POCO class:
public class Order
{
public string Service { get; set; }
public string OrderTitle { get; set; }
public string Currency { get; set; }
}
Now, what's a good way to extend this class on the server side?
I guess using a subclass would not work.
Delegates?
Actually have two different versions of the class? One for clients and one for the server?
What do other people do?
The problem with adding extra functionality to this POCO class is that you are turning it into a domain object. The nature of this domain object will now be constrained by the fact that, essentially, this class acts as the definition of the interface into the operation. Changing details of this class will potentially break clients.
It is a far cleaner model to keep this class purely as a Data Transfer Object, whose single responsibility is bridging the wire format to objects, and to use a mapper such as AutoMapper to map the data from the DTO to a real domain object. The real domain object is fully under your control, and you can happily refactor it without threatening a cascading effect on your service consumers.
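For illustration, a minimal mapping sketch (assuming a current AutoMapper version; DomainOrder is a hypothetical server-side domain class):
using AutoMapper;

// One-time configuration: map the wire-format DTO to the domain object.
// Add ForMember() calls where property names differ.
var config = new MapperConfiguration(cfg =>
    cfg.CreateMap<Supertext.API.Order, DomainOrder>());
var mapper = config.CreateMapper();

// Inside the service operation:
DomainOrder domainOrder = mapper.Map<DomainOrder>(apiOrder);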
I know this might be an opinion-based question, but I'd rather ask, as there might be some design principle for this.
I have a .NET C# Web API application that exposes a few APIs to retrieve some data from a database. I'm also using MediatR on this project. The APIs each take one request object, and all of these request objects have exactly the same properties. Imagine we have a BaseProduct class from which ProductA, ProductB, and ProductC have been inherited in the domain project. Now, I need to expose APIs to return these three objects to the users. Here's an example of two of the request objects.
With Inheritance:
public abstract class BaseGetProductRequest { // the props here }
public class GetProductARequest : BaseGetProductRequest, IRequest<GetProductAResponse> { }
public class GetProductBRequest : BaseGetProductRequest, IRequest<GetProductBResponse> { }
public class GetProductAResponse { public ProductA[] Products {get; set;} }
Each of the above requests also has its own request handler class.
With generics (BaseProduct is the domain class from which the different product types inherit):
public class GetProductRequest<TProductType> : IRequest<TProductType[]> where TProductType : BaseProduct { // all props in here }
Which will be used like this in an API:
public async Task<ProductA[]> Get([FromRoute] GetProductRequest<ProductA> request) { // API body }
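For completeness, a single generic handler could then serve all product types. A sketch (MediatR's IRequestHandler; the repository abstraction and its GetAllAsync method are assumed):
using System.Threading;
using System.Threading.Tasks;
using MediatR;

public class GetProductHandler<TProductType>
    : IRequestHandler<GetProductRequest<TProductType>, TProductType[]>
    where TProductType : BaseProduct
{
    private readonly IProductRepository _repository; // hypothetical abstraction

    public GetProductHandler(IProductRepository repository)
    {
        _repository = repository;
    }

    public Task<TProductType[]> Handle(
        GetProductRequest<TProductType> request,
        CancellationToken cancellationToken)
    {
        // Query the store for the concrete product type (method name assumed).
        return _repository.GetAllAsync<TProductType>(cancellationToken);
    }
}
With most DI containers, such a handler would be registered as an open generic.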
So, the question is: which of the following would be the better approach from a design point of view?
To take the Inheritance approach as above
Or to implement the requests and request handlers using generics, so we end up with fewer files
Personally, I would prefer the first approach, as I want to literally have separate request objects for each API. That looks cleaner to me, and it would be consistent with the rest of the code as well (not all of the request objects can be implemented generically). Besides, if by any chance a type-specific property needs to be added to a request object in the future, the code will be more flexible to that change.
Are there any specific design guidelines that recommend, for example, taking one over the other? Thanks for your opinions in advance.
Sure, the inheritance approach will give your project higher performance, due to the re-usability of compiled requests.
Let's say I have an interface like this:
public interface IUser
{
int Id { get; }
string Name { get; }
List<IMonthlyBudget> MonthlyBudget { get; }
}
and then I have a model that implements this:
public class User : IUser
{
public int Id { get; set; }
public string Name { get; set; }
public List<IMonthlyBudget> MonthlyBudget { get; set; }
}
and here I have the IMonthlyBudget:
public interface IMonthlyBudget
{
int Id { get; }
float MonthlyMax { get; }
float CurrentSpending { get; }
float MonthlyIncome { get; }
}
Now I have my models. But the issue comes with using SQLite. SQLite can't understand what the real implementation of IMonthlyBudget is. I understand why, but I really don't want to remove the interface and expose the real implementation to all the clients that use these models. In my project structure I have a Core project that has all the model interfaces, and the model implementations are in a data access project.
Is there something wrong with how I'm approaching this problem? I assume I'm not the first one to run into an issue like this. Isn't it completely normal practice to keep model interfaces (which repositories etc. then use as their return types, parameters and so on) and implement the actual concrete models in a data access project?
And can someone explain why I can't do this:
public class User : IUser
{
public int Id { get; set; }
public string Name { get; set; }
public List<MonthlyBudget> MonthlyBudget { get; set; }
}
MonthlyBudget implements IMonthlyBudget, so shouldn't it be completely fine to use the concrete model as the type instead of the interface when the concrete model actually implements the interface?
A few questions here, so I'll break it down into sections:
Use of Interfaces
It is definitely good practice to interface classes that perform operations. For example, you may have a data service (i.e. data access layer) interface that allows you to do operations to read and modify data in your persistent store. However, you may have several implementations of that data service. One implementation may save to the file system, another to a DBMS, another is a mock for unit testing, etc.
However, in many cases you do not need to interface your model classes. If you're using an anemic business object approach (as opposed to rich business objects), then model classes in general should just be containers for data, or Plain Old CLR Objects (POCO). Meaning these objects don't have any real functionality to speak of and they don't reference any special libraries or classes. The only "functionality" I would put in a POCO is one that is dependent only upon itself. For example, if you have a User object that has a FirstName and LastName property, you could create a read-only property called FullName that returns a concatenation of the two.
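For example, such a self-contained property is the only kind of logic the POCO would carry:
public class User
{
    public string FirstName { get; set; }
    public string LastName { get; set; }

    // Read-only convenience property that depends only on the object itself.
    public string FullName
    {
        get { return FirstName + " " + LastName; }
    }
}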
POCOs are agnostic as to how they are populated and therefore can be utilized in any implementation of your data service.
This should be your default direction when using an anemic business object approach, but there is at least one exception I can think of where you may want to interface your models. You may want to support for example a SQLite data service, and a Realm (NoSQL) data service. Realm objects happen to require your models to derive from RealmObject. So, if you wanted to switch your data access layer between SQLite and Realm then you would have to interface your models as you are doing. I'm just using Realm as an example, but this would also hold true if you wanted to utilize your models across other platforms, like creating an observable base class in a UWP app for example.
The key litmus test to determining whether you should create interfaces for your models is to ask yourself this question:
"Will I need to consume these models in various consumers and will those consumers require me to define a specific base class for my models to work properly in those consumers?"
If the answer to this is "yes", then you should make interfaces for your models. If the answer is "no", then creating model interfaces is extraneous work and you can forego it and let your data service implementations deal with the specifics of their underlying data stores.
SQLite Issue
Whether you continue to use model interfaces or not, you should still have a data access implementation for SQLite which knows that it's dealing with SQLite-specific models and then you can do all your CRUD operations directly on those specific implementations of your model. Then since you're referring to a specific model implementation, SQLite should work as usual.
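As a sketch (assuming the sqlite-net library; your SQLite stack may differ), the SQLite-specific data service works against the concrete model and only exposes the interface to callers:
using System.Collections.Generic;
using System.Linq;
using SQLite;

public class SqliteBudgetService
{
    private readonly SQLiteConnection _db;

    public SqliteBudgetService(string databasePath)
    {
        _db = new SQLiteConnection(databasePath);
        _db.CreateTable<MonthlyBudget>(); // concrete type, so SQLite can map it
    }

    public List<IMonthlyBudget> GetBudgets()
    {
        // Query the concrete type, return the interface type.
        return _db.Table<MonthlyBudget>().Cast<IMonthlyBudget>().ToList();
    }
}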
Type Compatibility
To answer your final question, the type system does not see this...
List<IMonthlyBudget> MonthlyBudget
as being type-compatible with this...
List<MonthlyBudget> MonthlyBudget
In our minds it seems like if I have a list of apples, then it should be type-compatible with a list of fruit. The compiler sees an apple as a type of fruit, but not a list of apples as a type of a list of fruit. So you can't cast between them like this...
List<IMonthlyBudget> myMonthlyBudget = (List<IMonthlyBudget>) new List<MonthlyBudget>(); // compile-time error: no conversion exists between these list types
but you CAN add a MonthlyBudget object to a list of IMonthlyBudget objects like this...
List<IMonthlyBudget> myMonthlyBudget = new List<IMonthlyBudget>();
myMonthlyBudget.Add(new MonthlyBudget());
Also, you can use the LINQ .Cast<T>() method if you want to cast an entire list at once.
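For example (a short sketch):
using System.Collections.Generic;
using System.Linq;

List<MonthlyBudget> concreteList = new List<MonthlyBudget>();

// Cast() + ToList() builds a new list, casting each element to the interface.
List<IMonthlyBudget> interfaceList = concreteList.Cast<IMonthlyBudget>().ToList();

// IEnumerable<out T> is covariant (unlike List<T>), so this assignment
// compiles with no copying at all:
IEnumerable<IMonthlyBudget> budgets = concreteList;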
The reason behind this has to do with type variance. There's a good article on it here that can shed some light as to why:
Covariance and Contravariance
I hope that helps! :-)
I was wondering if there is any way to define a WCF contract class without using the [DataContract] and [DataMember] annotations. The reason is that the domain model we currently have is fairly clean, so we would like to keep it that way. What's the best practice here? Create a transfer object and copy the domain model object into a transfer object (one that has the required annotations and is the contract transferred between client and server)? Or somehow leave the object model unannotated and specify the contract in a different way?
If you do not add any serialization attributes to your class and use it as part of a WCF service contract method, WCF will use the default serialization rules to produce a data contract anyway. This means that the class will implicitly become a [DataContract], and every public property that has both a get and a set accessor will implicitly become a [DataMember].
The only time you need to apply the attributes is if you want to override the default behavior, e.g. hiding some properties, applying namespaces, etc. It's generally considered good practice to do so anyway, because relying on the default behavior might get you in trouble later. (It also makes it explicit that your class is meant for use by WCF.) But it's not strictly required, as long as the default behavior meets your needs.
In response to your follow-up:
As far as I know there's no completely external way to change the serialization behavior of the DataContractSerializer for a given class; every option requires at least some level of attribution on the class being serialized. As #Yair Nevet describes below, my preferred method for turning existing domain objects into data contracts is the MetadataType attribute.
Alternatively, you can bypass the whole issue by doing what you suggested in your question: don't serialize your domain objects, but create custom DTO objects and serialize them. I tend to do this whenever I'm using the Entity Framework, for example, because serializing those can be tricky. This is also a good approach to take if your domain objects have lots of behaviors built into them -- you get a clear separation of "data being passed around" vs. "objects participating in my business logic."
You often end up with lots of redundant code, but it does achieve your goal of zero changes to your existing objects.
You can use the MetadataType attribute and a metadata model class in order to separate the annotations from your model.
For example:
[MetadataType(typeof(MyModelMetadata))]
public class MyModel : MyModelBase {
... /* the current model code */
}
[DataContract]
public class MyModelMetadata {
[DataMember]
public string Name { get; set; }
}
WCF is capable of serializing your objects without the attributes. The attributes are there to allow for customization. For example, the following two classes will be serialized identically by the DataContractSerializer:
public class Customer
{
public string FirstName { get; set; }
public string LastName { get; set; }
}
[DataContract]
public class Customer
{
[DataMember] public string FirstName { get; set; }
[DataMember] public string LastName { get; set; }
}
It is worth mentioning that you really should mark your classes with the attributes. They aren't as "messy" as you think, and doing so will save you headaches in the future. For example:
[DataContract(Name = "Customer")]
public class Customer
{
[DataMember(Name = "FirstName")]
public string FirstName { get; set; }
[DataMember(Name = "LastName")]
public string LastName { get; set; }
}
In the previous code sample, I explicitly set the names of the class and members. This allows me to refactor the names without breaking consumers' code. So, if someone decides that my class should be named CustomerDetail instead of Customer, I can still leave the contract name as Customer so that consumers of my service continue to work.
You could always use DTOs. Make a separate class that has everything that is needed to serialize your objects. Then project your domain model on to the DTO. You could use something like AutoMapper to make this process a little easier.
Regarding Performance
Unless you have hundreds, probably thousands, of objects or a very large number of properties per class, the act of converting to and from DTOs probably isn't that much performance overhead.
If you are using something like EF, and you are not serializing every property, you might even be able to reduce some overhead by projecting your EF query directly onto your DTOs.
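For example, a projection like this (a sketch; the context, entity, and DTO names are assumed) lets EF generate SQL that selects only the columns the DTO needs:
// Only FirstName and LastName are read from the database;
// the remaining entity properties are never materialized.
var customers = dbContext.Customers
    .Select(c => new CustomerDto
    {
        FirstName = c.FirstName,
        LastName = c.LastName
    })
    .ToList();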
This is kind of a dramatic case, but I had (poorly designed) database models with 50+ properties per type. By changing to DTOs that only had the 10-15 properties I cared about, I was able to almost double the performance of a WCF service.
I'm starting to use ServiceStack to implement a web service API. I'm trying to follow the examples and best practices as much as possible, but sometimes this is not that easy (it seems that many samples have not yet been updated to follow the new API design).
What I currently have is something like this:
an assembly named MyApp.ServiceInterface containing the implementation of the services/methods
an assembly named MyApp.ServiceModel containing the request and response types and the DTOs
In the MyApp.ServiceModel assembly, I have for example:
namespace MyApp.ServiceModel
{
public abstract class ResponseBase
{
public ResponseStatus ResponseStatus { get; set; } // for error handling
}
[Route("/products/{Id}")] // GET: products/123
[Route("/products")] // GET: products?Name=...
public class ProductRequest : IReturn<ProductResponse>
{
public int Id { get; set; }
public string Name { get; set; }
}
public class ProductResponse : ResponseBase
{
public Types.Product Product { get; set; }
}
}
namespace MyApp.ServiceModel.Types
{
public class Product
{
public int Id { get; set; }
public string Name { get; set; }
// ...
}
}
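For context, the matching service in MyApp.ServiceInterface currently looks roughly like this (a sketch; the data access is stubbed out):
using ServiceStack;
using MyApp.ServiceModel;
using MyApp.ServiceModel.Types;

namespace MyApp.ServiceInterface
{
    public class ProductService : Service
    {
        public object Get(ProductRequest request)
        {
            // Look up by Id or Name depending on which route matched
            // (real data access omitted; this just stubs a product).
            var product = new Product { Id = request.Id, Name = request.Name };
            return new ProductResponse { Product = product };
        }
    }
}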
Questions:
I have seen different ways of naming the request types (e.g. GetProduct, ProductRequest or just Product). What is the recommended approach?
Does the naming somehow depend on whether the service is a REST-service or not?
Would it be a good idea to put the request and response types into separate (sub-)namespaces (e.g. MyApp.ServiceModel.Requests and MyApp.ServiceModel.Responses)?
Why is the assembly containing the implementations named ServiceInterface (wouldn't ServiceImplementation fit better)?
API design is subjective, so there's no single recommended approach, although I personally dislike appending a 'Request' suffix to my Request DTOs since they're effectively your web service contract. I also dislike the use of inheritance in Service Models to DRY up properties, as it hides intent in your service layer, which is your most important contract.
The name of the Request DTOs doesn't affect REST APIs with custom routes, since there's no externally visible difference between different Request DTOs using the same custom route. It does, however, affect the surface area when using the end-to-end typed clients, since it forms the visible part of your typed API.
Here are a couple of answers which describe my preferences of how I would design service APIs:
Designing a REST-ful service with ServiceStack
How to design a Message-Based API
C# namespaces in DTOs have no visible effect on your API. In ServiceStack, Request DTOs map 1:1 with your services, so they must be unique; if you append a 'Response' suffix to your Response DTOs, they will end up being unique as well. As a goal, I ensure all my DTOs, both operations and types, are uniquely named, so it doesn't matter what their physical layout is. As a convention, I now like to place my operation DTOs (i.e. Request/Response) at the top level of the Service Model assembly, with the Request and Response DTOs in the same C# .cs file, whilst all other 'DTO types' go in a Types folder, e.g.:
/Products.cs (holds GetProduct and ProductResponse DTOs)
/Types/Product.cs
It's called Service Interface since it matches the Gateway / Service Interface pattern, where the client side is called the Client Gateway whilst the server side is called the Service Interface. The use of 'Interface' here means the service entry point, not a C# interface.
I am a newbie to SOA, though I have some experience in OOAD.
One of the guidelines for SOA design is “Use Abstract Classes for Modeling only. Omit them from Design”. The use of abstraction can be helpful in modeling (analysis phase).
During the analysis phase I came up with a BankAccount base class. The specialized classes derived from it are "FixedAccount" and "SavingsAccount". I need to create a service that will return all accounts (a list of accounts) for a user. What should the structure of the service(s) be to meet this requirement?
Note: It would be great if you can provide code demonstration using WCF.
It sounds like you are trying to use SOA to remotely access your object model. You would be better off looking at the interactions and capabilities you want your service to expose, and avoid exposing inheritance details of your service's implementation.
So in this instance, where you need a list of user accounts, your interface would look something like this:
[ServiceContract]
interface ISomeService
{
[OperationContract]
Collection<AccountSummary> ListAccountsForUser(
User user /*This information could be out of band in a claim*/);
}
[DataContract]
class AccountSummary
{
[DataMember]
public string AccountNumber {get;set;}
[DataMember]
public string AccountType {get;set;}
//Other account summary information
}
If you do decide to go down the inheritance route, you can use the KnownType attribute, but be aware that this will add some type information into the message being sent across the wire, which may limit your interoperability in some cases.
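For example, the KnownType route would look something like this (a sketch; the InterestRate and MaturityDate members are invented for illustration):
using System;
using System.Runtime.Serialization;

[DataContract]
[KnownType(typeof(FixedAccount))]
[KnownType(typeof(SavingsAccount))]
public abstract class BankAccount
{
    [DataMember]
    public string AccountNumber { get; set; }
}

[DataContract]
public class SavingsAccount : BankAccount
{
    [DataMember]
    public decimal InterestRate { get; set; } // invented for illustration
}

[DataContract]
public class FixedAccount : BankAccount
{
    [DataMember]
    public DateTime MaturityDate { get; set; } // invented for illustration
}
The KnownType attributes tell the DataContractSerializer which concrete subtypes may appear wherever a BankAccount is declared.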
Update:
I was a bit limited for time earlier when I answered, so I'll try and elaborate on why I prefer this style.
I would not advise exposing your OOAD via DTOs in a separate layer; this usually leads to a bloated interface where you pass around a lot of data that isn't used and religiously map it into and out of what is essentially a copy of your domain model with all the logic deleted, and I just don't see the value in that. I usually design my service layer around the operations that it exposes, and I use DTOs for the definition of the service interactions.
Using DTOs based on exposed operations and not on the domain model helps keep the service encapsulation and reduces coupling to the domain model. By not exposing my domain model, I don't have to make any compromises on field visibility or inheritance for the sake of serialization.
For example, if I were exposing a Transfer method from one account to another, the service interface would look something like this:
[ServiceContract]
interface ISomeService
{
[OperationContract]
TransferResult Transfer(TransferRequest request);
}
[DataContract]
class TransferRequest
{
[DataMember]
public string FromAccountNumber {get;set;}
[DataMember]
public string ToAccountNumber {get;set;}
[DataMember]
public Money Amount {get;set;}
}
class SomeService : ISomeService
{
public TransferResult Transfer(TransferRequest request)
{
//Check parameters...omitted for clarity
var from = repository.Load<Account>(request.FromAccountNumber);
//Assert that the caller is authorised to request transfer on this account
var to = repository.Load<Account>(request.ToAccountNumber);
from.Transfer(to, request.Amount);
return new TransferResult(); //Build an appropriate response (or fault)
}
}
Now, from this interface it is very clear to the consumer what data is required to call this operation. If I implemented this as
[ServiceContract]
interface ISomeService
{
[OperationContract]
TransferResult Transfer(AccountDto from, AccountDto to, MoneyDto dto);
}
and AccountDto is a copy of the fields in Account, then as a consumer, which fields should I populate? All of them? If a new property is added to support a new operation, all users of all operations can now see this property. WCF allows me to mark this property as non-mandatory so that I don't break all of my other clients, but if it is mandatory for the new operation, the client will only find out when they call the operation.
Worse, as the service implementer, what happens if they have provided me with a current balance? Should I trust it?
The general rule here is to ask who owns the data, the client or the service? If the client owns it, then it can pass it to the service and after doing some basic checks, the service can use it. If the service owns it, the client should only pass enough information for the service to retrieve what it needs. This allows the service to maintain the consistency of the data that it owns.
In this example, the service owns the account information, and the key to locate it is an account number. While the service may validate the amount (is it positive, a supported currency, etc.), this data is owned by the client, and therefore we expect all fields on the DTO to be populated.
In summary, I have seen it done all 3 ways, but designing DTOs around specific operations has been by far the most successful both from service and consumer implementations. It allows operations to evolve independently and is very explicit about what is expected by the service and what will be returned to the client.
I would go pretty much with what others have said here, but I probably need to add these points:
Most SOA systems use Web Services for communication. Web Services expose their interface via WSDL. WSDL does not have any understanding of inheritance.
All behaviour in your DTOs will be lost when they cross the wire
All private/protected fields will be lost when they cross the wire
Imagine this scenario (case is silly but illustrative):
public abstract class BankAccount
{
private DateTime _creationDate = DateTime.Now;
public DateTime CreationDate
{
get { return _creationDate; }
set { _creationDate = value; }
}
public virtual string CreationDateUniversal
{
get { return _creationDate.ToUniversalTime().ToString(); }
}
}
public class SavingAccount : BankAccount
{
public override string CreationDateUniversal
{
get
{
return base.CreationDateUniversal + " UTC";
}
}
}
And now you have used "Add Service Reference" or "Add Web Reference" on your client (rather than re-using the assemblies) to access the saving account.
SavingAccount account = serviceProxy.GetSavingAccountById(id);
account.CreationDate = DateTime.Now;
var creationDateUniversal = account.CreationDateUniversal; // out of sync!!
What is going to happen is that changes to CreationDate will not be reflected in CreationDateUniversal, since no implementation crossed the wire, only the value of CreationDateUniversal at the time of serialization on the server.