Multiple DTOs for the same entity - C#

Is it good practice to use multiple DTOs for the same entity in different API endpoints? For example:
I have an API endpoint which accepts the following DTO:
public class AddressDto
{
    public string City { get; set; }
    public string Country { get; set; }
    public string Contact { get; set; }
    public string Street1 { get; set; }
    public string Street2 { get; set; }
    public string State { get; set; }
    public string Zip { get; set; }
}
And now there is a second API which accepts the same DTO, but in that API call I'm using only Street1, Street2 and Contact; all others are ignored.
Should I make another DTO for the second API endpoint, like:
public class AddressDtoForSecondAPI
{
    public string Contact { get; set; }
    public string Street1 { get; set; }
    public string Street2 { get; set; }
}

In short, yes it is acceptable.
However, as you can see in the comments and the other answer, not everyone agrees here. So let me explain my answer.
Argument 1 - Misleading the consumer
And now there is a second API which accepts the same DTO, but in that API call I'm using only Street1, Street2 and Contact; all others are ignored.
The issue here is one of making your intentions clear. If you allow a consumer to send you a fully fleshed-out AddressDto, but then only use a subset of its properties, you're misleading your consumer. You've made them think that the other properties are relevant.
This is effectively the same as:
public int AddNumbersTogether(int a, int b, int c, int d)
{
    return a + c + d; //we ignore b
}
There is no reason for b to exist. Anyone who uses this method is going to be scratching their head when AddNumbersTogether(1,2,3,4) returns a value of 8. The syntax contradicts the behavior.
Yes, it's easier to omit an unused method parameter than it is to develop a second DTO. But you need to be consistent here and stick to the same principle: not misleading the consumer.
Argument 2 - A DTO is not an entity
Your consumer's interaction with your API(s) needs to happen without the consumer knowing anything about the structure of your database records.
This is why you're using a DTO and not your entity class to begin with! You're providing a logical separation between taking an action and storing the data of that action.
The consumer doesn't care where the data is stored. Whether you store the street in the same table as the address, or in a different table (or database) altogether, does not matter in the scope of the consumer calling an API method.
Argument 3 - Countering S.Akbari
What about inheritance and/or interface segregation principle in SOLID? – S.Akbari
These are not valid arguments for this particular case.
Inheritance is a flawed approach. Yes, you can technically get away with doing something like AddressDto : AddressDtoForSecondAPI in the posted example code, but this is a massive code smell.
What happens when a third DTO is needed, e.g. one where only zip codes and city names are used? You can't have AddressDto inherit from multiple sources, and there is no logical overlap between AddressDtoForSecondAPI and the newly created AddressDtoForThirdAPI.
Interfaces are not the solution here. Yes, you could technically create an IAddressDtoForSecondAPI and IAddressDtoForThirdAPI interface with the appropriate fields, and then do something like AddressDto : IAddressDtoForSecondAPI, IAddressDtoForThirdAPI. However, this is the same massive code smell again.
What happens if the second and third variation have a few shared properties, and a few individual ones? If you apply interface segregation, then the overlapping properties need to be abstracted in an interface by themselves.
If then a fourth variation presents itself, which has some properties in common with the second variation, some with the third variation, some with both the second AND third variation, and some individual properties, then you're going to need to create even more interfaces!
Given enough variations of the same entity, repeatedly applying the interface segregation principle means you're going to end up with an interface for every property of the entity, which requires a ridiculous amount of boilerplate. You'll end up with something like:
public class AddressDto : IAddressCity, IAddressCountry, IAddressContact, IAddressStreet1, IAddressStreet2, IAddressState, IAddressZip
{
    public string City { get; set; }
    public string Country { get; set; }
    public string Contact { get; set; }
    public string Street1 { get; set; }
    public string Street2 { get; set; }
    public string State { get; set; }
    public string Zip { get; set; }
}
Imagine having to do this for all classes; since the same principle would apply to every DTO that is being used by the API.
Argument 4 - DRY does not apply here
I sort of get why you're apprehensive of creating two classes. Most likely, there's a DRY/WET error flag being raised in your mind.
Avoiding WET is a good reflex to have; but you can't always listen to it. Because if you were to really avoid duplication, then you should effectively also not create separate entity and DTO classes, as they are usually copy/pastes of each other.
DRY is not an absolute. Taking the entity/DTO example, there is a balance of considerations here:
Do you want to avoid repetition at all costs? (= DRY)
Do you want to separate your DAL from your API logic? (= separation of concerns)
In this case, the latter generally wins out.
The same argument applies in your case. The arguments against following DRY (which are the arguments I just listed) far outweigh the benefits of following DRY in this scenario.
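To make this concrete, here's a minimal sketch of what the two endpoints could look like, assuming an ASP.NET Core controller; the controller, routes and mapping comments are illustrative, not from the question:
using Microsoft.AspNetCore.Mvc;

[ApiController]
[Route("api/addresses")]
public class AddressesController : ControllerBase
{
    // First endpoint: consumes the full address DTO.
    [HttpPost]
    public IActionResult Create(AddressDto dto)
    {
        // map AddressDto onto the Address entity and persist (omitted)
        return Ok();
    }

    // Second endpoint: consumes only the fields it actually uses.
    [HttpPost("contact")]
    public IActionResult UpdateContact(AddressDtoForSecondAPI dto)
    {
        // map only Street1, Street2 and Contact onto the entity (omitted)
        return Ok();
    }
}
Each endpoint's signature now documents exactly what it consumes, and both DTOs can still map onto the same entity internally.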

Related

What is the design pattern to use for extending a base class with additional properties?

I have "business entities" and their counterpart for saving them to Azure Storage Table, which requires a few additional properties.
// MyData is the business entity with a few properties
public record MyData_AzureTable : MyData, ITableEntity
{
    // Required properties for storing data to Azure Storage Table
    public string PartitionKey { get; set; } = "";
    public string RowKey { get; set; } = "";
    public DateTimeOffset? Timestamp { get; set; }
    public ETag ETag { get; set; } = new ETag();
}
I am getting tired of having to duplicate each business entity with its AzureTable counterpart, but I can't find the correct pattern to use. I'd like something like the following, except that it's illegal to inherit from a type parameter:
public record AzureTable<T> : T, ITableEntity
{
    public string PartitionKey { get; set; } = "";
    public string RowKey { get; set; } = "";
    public DateTimeOffset? Timestamp { get; set; }
    public ETag ETag { get; set; } = new ETag();
}
What pattern should be used for adding properties to a base class?
The object saved to Azure Table Storage needs to be "flat" (tabular data as property values, no hierarchical data or encapsulation)
Not necessarily a pattern but abstract classes may fit well for your need here. Check out the docs: https://learn.microsoft.com/en-us/dotnet/csharp/language-reference/keywords/abstract
In short, those classes share an abstract base class (which holds your always-present Azure properties), and every class that inherits from it gets those properties without having to implement them itself (as it would with an interface); you can still extend each child and add more custom properties to it.
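A minimal sketch of that abstract-base-class idea, assuming the Azure.Data.Tables ITableEntity interface; the class names below are made up for illustration:
using System;
using Azure;
using Azure.Data.Tables;

// Base record holding the always-required Azure Table properties.
public abstract record AzureTableEntityBase : ITableEntity
{
    public string PartitionKey { get; set; } = "";
    public string RowKey { get; set; } = "";
    public DateTimeOffset? Timestamp { get; set; }
    public ETag ETag { get; set; } = new ETag();
}

// A concrete table entity now only declares its business properties.
public record MyData_AzureTable : AzureTableEntityBase
{
    public string Name { get; set; } = ""; // hypothetical business property
}
Note that this centralizes the Azure properties, but it still doesn't let MyData_AzureTable inherit from MyData as well, so the business properties themselves remain duplicated.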
There is no such pattern; currently C# does not support anything like traits (though in some cases something similar can be achieved with default interface member implementations) or multiple inheritance (basically, MyData_AzureTable would have to inherit from both MyData and AzureTable). If you are really tired of writing "duplicates" for data, you should consider using source generators - you can write quite a simple one which will generate the Azure Table classes for all required classes (for example, those marked with a special attribute like GenerateAzureTable). Potentially it can also generate some useful methods for mapping, copy constructors and so on.
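As a rough illustration of the source-generator route, the marker attribute and the generated output might look something like this; the attribute is hypothetical and the generator itself is omitted:
using System;
using Azure;
using Azure.Data.Tables;

// Hypothetical marker attribute the source generator would look for.
[AttributeUsage(AttributeTargets.Class)]
public sealed class GenerateAzureTableAttribute : Attribute { }

[GenerateAzureTable]
public partial record MyData
{
    public string Name { get; set; } = "";
}

// Roughly what the generator would emit for MyData (written by hand here):
public record MyData_AzureTable : MyData, ITableEntity
{
    public string PartitionKey { get; set; } = "";
    public string RowKey { get; set; } = "";
    public DateTimeOffset? Timestamp { get; set; }
    public ETag ETag { get; set; } = new ETag();
}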

Is it a good idea to model keywords as value object in Product aggregate?

I have a product aggregate which has several keywords to help with searching for products. I have modeled it as follows:
public class Product : Entity<Guid, Product>, IAggregateRoot
{
    public Guid AccountId { get; protected set; }
    public string Title { get; protected set; }
    public DateTimeOffset AddingDate { get; protected set; }
    public decimal Price { get; protected set; }
    public string Brand { get; protected set; }
    public string Description { get; protected set; }
    public IList<Keyword> Keywords { get; protected set; }
}
public class Keyword : ValueObject<Keyword>
{
    public Keyword(string title)
    {
        this.Title = title;
    }
    public string Title { get; protected set; }
}
According to entity-vs-value-object-the-ultimate-list-of-differences, value objects have several characteristics:
1. Two value objects are considered equal if they have the same attribute values.
2. Value objects have zero lifespan.
3. Value objects are immutable.
But for searching purposes I will store keywords as a many-to-many relationship with the product table, as recommended here, instead of as a comma-separated string.
So I am aiming to model Keyword as a value object because I don't care about its identity (whether it's an auto-generated integer or a Guid) and because two keywords should be considered equal by their attribute (which here is Title).
My question is: should I model Keyword as a value object or an entity given the scenario above, and why?
Edit:
According to the article I referenced above:
Don’t introduce separate tables for value objects, just inline them into the parent entity’s table.
Keyword should be considered an Entity (but I think that domain and database models should not depend on each other).
I don't think I'd bother even having the keywords in the domain.
They seem to be a classifier of sorts (in quite a conceptual sense). If I did have them in the domain I'd use a simple string list. But again, those keywords probably have very little business value and probably don't have any rules associated with them. I'm guessing they aid in searching for a particular product.
You may want to "manage" the keywords separately on your UI anyway.
You could take this even further and quite easily have a generic (perhaps sub-domain) Tag/Keyword "repository" where any Id (say a Guid) can have a list of keywords or tags. In this way you could associate keywords with anything really.
To take this to the extreme a generic classification system may even be useful... but that is another topic :)
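A minimal sketch of the generic Tag/Keyword "repository" idea mentioned above; every name here is hypothetical:
using System;
using System.Collections.Generic;

// Any Id (a Guid) can have keywords associated with it, independent of Product.
public interface IKeywordRepository
{
    void Associate(Guid id, IEnumerable<string> keywords);
    IReadOnlyCollection<string> KeywordsFor(Guid id);
    IReadOnlyCollection<Guid> FindIdsByKeyword(string keyword);
}
The Product aggregate itself could then either carry a plain list of strings or not reference the keywords at all.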
You could define predicates as arguments for searching methods, and put the predicates related to an aggregate in the module of that aggregate (module as a DDD concept), i.e. a Java package, or a namespace in your case (not sure whether Microsoft terminology calls it a namespace; I come from the Java world).
Anyway, if you don't use predicates, you can create a value object holding the search keywords (like a DTO), and it would live in the same module as the aggregate.
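For instance, a minimal sketch of such a search-keywords value object, with purely illustrative names:
using System.Collections.Generic;
using System.Linq;

// Lives in the same module/namespace as the Product aggregate.
public sealed class ProductSearchKeywords
{
    public IReadOnlyList<string> Keywords { get; }

    public ProductSearchKeywords(IEnumerable<string> keywords)
    {
        // Normalize and de-duplicate; two instances built from the same
        // keywords are interchangeable, as befits a value object.
        Keywords = keywords.Select(k => k.Trim().ToLowerInvariant())
                           .Where(k => k.Length > 0)
                           .Distinct()
                           .ToList();
    }
}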
Hope this explanation helps.

Deserializing Json into objects

The app I'm currently working on is going to make a lot of requests that all return JSON. Would it be appropriate to create, for each specific request, a class whose properties the returned JSON can be converted into?
For example, say I have 2 requests, one that returns a first name, a surname and a job role, and another that returns a business name, business location and business postcode. Would it be OK to do:
class BusinessRetrieval
{
    public string BusinessLocation { get; set; }
    public string BusinessPostCode { get; set; }
}
class EmployeeRetrieval
{
    public string Firstname { get; set; }
    public string Surname { get; set; }
    public string Postcode { get; set; }
}
So now I have 2 classes that outline the properties that are going to be sent back once the request is made. Now, is it OK to just do:
BusinessRetrieval business = (BusinessRetrieval)JsonConvert.DeserializeObject(businessResponse, typeof(BusinessRetrieval));
EmployeeRetrieval employee = (EmployeeRetrieval)JsonConvert.DeserializeObject(employeeResponse, typeof(EmployeeRetrieval));
What I'm asking here is: is it OK to go about it this way? I'm going to be dealing with a lot of requests (10-15) and I plan on making a class for each, outlining the properties that each response will give back. I feel as if this would be a nice way to structure it.
Is this OK?
Yes, it is the only reasonable way to handle it if you want your code to be type safe.
One small note: the explicit cast is only needed because you're using the non-generic DeserializeObject overload; the generic version of DeserializeObject is cleaner:
BusinessRetrieval business = JsonConvert.DeserializeObject<BusinessRetrieval>(businessResponse);
EmployeeRetrieval employee = JsonConvert.DeserializeObject<EmployeeRetrieval>(employeeResponse);
I think it is not only okay to do; I think it would be a best practice, so you can pass that object through any methods without problems.
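For completeness, a rough sketch of how one of these calls might look end to end; the HttpClient usage and URL parameter are assumptions for illustration, not part of the question:
using System.Net.Http;
using System.Threading.Tasks;
using Newtonsoft.Json;

public class BusinessApiClient
{
    private static readonly HttpClient Http = new HttpClient();

    public async Task<BusinessRetrieval> GetBusinessAsync(string url)
    {
        // Fetch the JSON payload and deserialize it into the typed class.
        string businessResponse = await Http.GetStringAsync(url);
        return JsonConvert.DeserializeObject<BusinessRetrieval>(businessResponse);
    }
}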

An alternative lookup table approach needed to make C# models more generic

I currently have the following Models in my EF Code First MVC project (edited for brevity):
public class Car
{
    public int Id { get; set; }
    public string Descrip { get; set; }
    // Navigation Property.
    public virtual CarColour CarColour { get; set; }
    ... + numerous other navigation properties.
}
public class CarColour
{
    public int Id { get; set; }
    public string ColourName { get; set; }
}
The CarColour table in the DB contains many rows.
In my project, I have about 10 of these sorts of tables, which are essentially lookup tables.
Rather than have 10 lookup tables (and 10 corresponding 'hard' types in code), I was tasked with implementing a more re-usable approach: instead of loads of lookup tables specific to Car (in this example), have a couple of tables, one of which holds the item types (colour, fuel-type, etc.) and one which contains the various values for each of those types. The idea is that our model can be re-used by many other projects, some of which will have potentially hundreds of different attributes, and as such we won't want to create a new class/type in code and generate a new lookup table for each.
I am having difficulty in understanding the c# implementation of this sort of approach and hope someone may be able to give me an example of how this can be achieved in code, more specifically, how the above models would need to change, and what additional classes would be required to accomplish this?
Your base entity must implement INotifyPropertyChanged and make it generic:
private CarColour carColour;
public virtual CarColour CarColour
{
    get { return this.carColour; }
    set
    {
        this.carColour = value;
        OnPropertyChanged("CarColour");
    }
}
For more info see :
patterns & practices: Prism in CodePlex.
http://compositewpf.codeplex.com/wikipage?title=Model%20View%20ViewModel%20(MVVM)
This is not necessarily specific to EF but I've been down this road and didn't really enjoy it.
I wanted to use a single table to represent 'generic' information, and while I thought it was smart, it soon showed its limitations. One of them is the complexity you need to introduce when writing queries to extract this data if you're performing more than just 'get colours for this car'.
I'd say, if your data is simple key/value and the value type is always going to be the same, then go for it; it might even be worth having this as mere 'meta-data' for an object:
public class Car
{
    public int Id { get; set; }
    public string Descrip { get; set; }
    public MetaData CarColours { get; set; }
}
public class MetaData : Dictionary<int, string>
{
    public MetaData(int group) { }
}
Hypothetical table:
TableMetaData(int metaGroup, int metaId, string metaValue)
If you're hoping to store different types as your value and may need to perform joining on this data - avoid it and be a bit more specific.
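For reference, a rough sketch of the two-table shape the question describes (one table of attribute types, one of their values); all class and property names here are made up for illustration:
using System.Collections.Generic;

// One row per kind of lookup, e.g. "Colour", "FuelType".
public class LookupType
{
    public int Id { get; set; }
    public string Name { get; set; }
    public virtual ICollection<LookupValue> Values { get; set; }
}

// One row per value of a given type, e.g. "Red", "Diesel".
public class LookupValue
{
    public int Id { get; set; }
    public int LookupTypeId { get; set; }
    public string Value { get; set; }
    public virtual LookupType LookupType { get; set; }
}

// Car then references generic lookup values instead of hard-typed lookups.
public class Car
{
    public int Id { get; set; }
    public string Descrip { get; set; }
    public virtual ICollection<LookupValue> Attributes { get; set; }
}
As the answer above warns, this is exactly the kind of model that becomes awkward to query once you need more than 'get colours for this car'.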

Parsing messages with variable length arrays of fixed length fields

I have a need to parse (and build) fixed length text based messages that may in some cases contain array fields.
Example:
PARTA LOTA 02SUBLOT1 SUBLOT2 03TEST1 RESULT1 TEST2 RESULT2 TEST3 RESULT3
If this were an object, it might use the Lot object below.
Part Number (PARTA)
Lot Number (LOTA)
An Array of 2 SubLot Objects (SUBLOT1 with quantity 150 and SUBLOT2 with Quantity 999)
An Array of 3 Test Results (TEST1 with result 1234.67890, ...)
Note that the number of array items is specified in the message.
I was hoping to use the FileHelpers library that I've seen people talking about, but it doesn't appear to support multiple array fields where another field specifies the quantity, and it doesn't support field types that themselves carry the [FixedLengthRecord()] attribute.
This is what I would like to be able to do. Note that the field length of 10 is just an artifact of keeping this simple. Not all fields would normally be defined with the same length.
[FixedLengthRecord()]
public class Lot
{
    [FieldFixedLength(10)]
    public string PartNumber { get; set; }
    [FieldFixedLength(10)]
    public string LotNumber { get; set; }
    [FieldFixedLength(10)]
    public SubLot[] SubLots { get; set; }
    [FieldFixedLength(10)]
    public Test[] Tests { get; set; }
}
[FixedLengthRecord()]
public class SubLot
{
    [FieldFixedLength(10)]
    public string SubLotNumber { get; set; }
    [FieldFixedLength(10)]
    public int Quantity { get; set; }
}
[FixedLengthRecord()]
public class Test
{
    [FieldFixedLength(10)]
    public string Description { get; set; }
    [FieldFixedLength(10)]
    public double Result { get; set; }
}
Anyone have any idea if this is possible with FileHelpers? Any other ideas? I have many different message types so I would rather not manually code for each one. The attribute decoration method in FileHelpers seems like a great clean solution and I'm considering just extending it, but I want to make sure I'm not missing a better solution out there.
I believe I have done something very similar in the past.
The way I tackled this issue was to use custom attributes. This allowed me to create classes and nested objects which described my data exactly as laid out in the specification, and to use custom attributes to describe the data attributes (length, type, padding requirements if required, etc.).
I also ended up writing custom serialization/deserialization for the classes and attributes; however, that was really specific to the actual application, as the data came through a custom government protocol which sent and received data in fixed-size chunks or packets over encrypted sockets with continuation codes, etc.
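As a very rough sketch of that custom-attribute approach (the attribute and parser below are hypothetical and not part of FileHelpers; handling the count-prefixed arrays would need extra metadata and is omitted):
using System;
using System.Globalization;

[AttributeUsage(AttributeTargets.Property)]
public sealed class FixedFieldAttribute : Attribute
{
    public int Length { get; }
    public FixedFieldAttribute(int length) => Length = length;
}

public static class FixedWidthParser
{
    // Reads consecutive fixed-width fields from `text`, starting at `offset`,
    // into a new instance of T, driven by the attribute metadata via reflection.
    // Assumes GetProperties returns properties in declaration order, which is
    // typical but not guaranteed by the runtime.
    public static T Parse<T>(string text, ref int offset) where T : new()
    {
        var result = new T();
        foreach (var prop in typeof(T).GetProperties())
        {
            var attr = (FixedFieldAttribute)Attribute.GetCustomAttribute(prop, typeof(FixedFieldAttribute));
            if (attr == null) continue;
            string raw = text.Substring(offset, attr.Length).Trim();
            offset += attr.Length;
            object value = Convert.ChangeType(raw, prop.PropertyType, CultureInfo.InvariantCulture);
            prop.SetValue(result, value);
        }
        return result;
    }
}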
Tutorials
http://msdn.microsoft.com/en-us/library/aa288454%28v=vs.71%29.aspx
http://www.codeproject.com/KB/cs/attributes.aspx
http://www.devx.com/dotnet/Article/11579
