The app I'm currently working on is going to make a lot of requests that all return JSON. Would it be appropriate to create a class for each specific request, holding the properties that the returned JSON can be deserialized into?
For example, say I have two requests: one returns a first name, a surname, and a job role, and another returns a business name, business location, and business postcode. Would it be OK to do:
class BusinessRetrieval
{
    public string BusinessName { get; set; }
    public string BusinessLocation { get; set; }
    public string BusinessPostCode { get; set; }
}
class EmployeeRetrieval
{
    public string Firstname { get; set; }
    public string Surname { get; set; }
    public string JobRole { get; set; }
}
So now I have two classes that outline the properties each response will send back. Is it OK to just do:
BusinessRetrieval business = (BusinessRetrieval)JsonConvert.DeserializeObject(businessResponse, typeof(BusinessRetrieval));
EmployeeRetrieval employee = (EmployeeRetrieval)JsonConvert.DeserializeObject(employeeResponse, typeof(EmployeeRetrieval));
What I'm asking is: is this an OK way to go about this? I'm going to be dealing with a lot of requests (10-15), and I plan on making a class for each one that outlines the properties its response will give back. I feel this would be a nice way to structure it.
Is this OK?
Yes, it is; a class per response type is the only reasonable way to keep your code type safe.
Note that the cast in your snippet only works because you pass the target Type to DeserializeObject; casting the result of the single-argument, non-generic DeserializeObject (which returns a JObject) would throw an InvalidCastException. Either way, the generic version of DeserializeObject is cleaner:
BusinessRetrieval business = JsonConvert.DeserializeObject<BusinessRetrieval>(businessResponse);
EmployeeRetrieval employee = JsonConvert.DeserializeObject<EmployeeRetrieval>(employeeResponse);
I think it is not only okay; it is a best practice, because you can then pass those typed objects to any method without problems.
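Following up on that point, here is a minimal sketch of how 10-15 such requests could share one generic helper, assuming Json.NET and HttpClient (the ApiClient name and the URLs are illustrative, not from the question):
using System.Net.Http;
using System.Threading.Tasks;
using Newtonsoft.Json;

// One generic helper covers every request type, since each response
// already has its own class.
public class ApiClient
{
    private static readonly HttpClient http = new HttpClient();

    public async Task<T> GetAsync<T>(string url)
    {
        string json = await http.GetStringAsync(url);
        return JsonConvert.DeserializeObject<T>(json);
    }
}

// Usage:
// var api = new ApiClient();
// BusinessRetrieval business = await api.GetAsync<BusinessRetrieval>("https://example.com/api/business/1");
// EmployeeRetrieval employee = await api.GetAsync<EmployeeRetrieval>("https://example.com/api/employee/1");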
Is it good practice to use multiple DTOs for the same entity in different API endpoints? For example:
I have an API endpoint which accepts the following DTO:
public class AddressDto
{
    public string City { get; set; }
    public string Country { get; set; }
    public string Contact { get; set; }
    public string Street1 { get; set; }
    public string Street2 { get; set; }
    public string State { get; set; }
    public string Zip { get; set; }
}
And now there is a second API which accepts the same DTO, but in that API call I'm using only Street1, Street2, and Contact; all others are ignored.
Should I make another DTO for the second API endpoint, like:
public class AddressDtoForSecondAPI
{
    public string Contact { get; set; }
    public string Street1 { get; set; }
    public string Street2 { get; set; }
}
In short, yes it is acceptable.
However, as you can see in the comments and the other answer, not everyone agrees here. So let me explain my answer.
Argument 1 - Misleading the consumer
And now there is a second API which accepts the same DTO, but in that API call I'm using only Street1, Street2, and Contact; all others are ignored.
The issue here is one of making your intentions clear. If you allow a consumer to send you a fully fleshed-out AddressDto but then only use a subset of its properties, you're misleading your consumer. You've made them think that the other properties are relevant.
This is effectively the same as:
public int AddNumbersTogether(int a, int b, int c, int d)
{
    return a + c + d; // we ignore b
}
There is no reason for b to exist. Anyone who uses this method is going to be scratching their head when AddNumbersTogether(1,2,3,4) returns a value of 8. The syntax contradicts the behavior.
Yes, it's easier to omit an unused method parameter than it is to develop a second DTO. But you need to be consistent here and stick to the same principle: not misleading the consumer.
Argument 2 - A DTO is not an entity
Your consumer's interaction with your API(s) needs to happen without the consumer knowing anything about the structure of your database records.
This is why you're using a DTO and not your entity class to begin with! You're providing a logical separation between taking an action and storing the data of that action.
The consumer doesn't care where the data is stored. Whether you store the street in the same table as the address or in a different table (or database) altogether does not matter from the perspective of a consumer calling an API method.
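As a hedged illustration of that separation (the entity shape and its extra fields below are assumptions, not from the question), the entity can carry storage concerns the consumer never sees, while each DTO exposes only what its endpoint uses:
using System;

// Entity: mirrors storage, including members the consumer never sees.
public class AddressEntity
{
    public int Id { get; set; }               // surrogate key, internal only
    public string Street1 { get; set; }
    public string Street2 { get; set; }
    public string Contact { get; set; }
    public DateTime ModifiedUtc { get; set; } // audit column, internal only
}

// DTO for the second endpoint: only the fields that endpoint actually uses.
public class AddressDtoForSecondAPI
{
    public string Contact { get; set; }
    public string Street1 { get; set; }
    public string Street2 { get; set; }
}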
Argument 3 - Countering S.Akbari
What about inheritance and/or interface segregation principle in SOLID? – S.Akbari
These are not valid arguments for this particular case.
Inheritance is a flawed approach. Yes, you can technically get away with doing something like AddressDto : AddressDtoForSecondAPI in the posted example code, but this is a massive code smell.
What happens when a third DTO is needed, e.g. one where only zip codes and city names are used? You can't have AddressDto inherit from multiple sources, and there is no logical overlap between AddressDtoForSecondAPI and the newly created AddressDtoForThirdAPI.
Interfaces are not the solution here either. Yes, you could technically create an IAddressDtoForSecondAPI and IAddressDtoForThirdAPI interface with the appropriate fields, and then do something like AddressDto : IAddressDtoForSecondAPI, IAddressDtoForThirdAPI. However, this is the same massive code smell again.
What happens if the second and third variation have a few shared properties, and a few individual ones? If you apply interface segregation, then the overlapping properties need to be abstracted in an interface by themselves.
If then a fourth variation presents itself, which has some properties in common with the second variation, some with the third variation, some with both the second AND third variation, and some individual properties, then you're going to need to create even more interfaces!
Given enough variations of the same entity, repeatedly applying the interface segregation principle means you end up with an interface for every property of the entity, which requires a ridiculous amount of boilerplate. You'll end up with something like:
public class AddressDto : IAddressCity, IAddressCountry, IAddressContact, IAddressStreet1, IAddressStreet2, IAddressState, IAddressZip
{
    public string City { get; set; }
    public string Country { get; set; }
    public string Contact { get; set; }
    public string Street1 { get; set; }
    public string Street2 { get; set; }
    public string State { get; set; }
    public string Zip { get; set; }
}
Imagine having to do this for all classes; since the same principle would apply to every DTO that is being used by the API.
Argument 4 - DRY does not apply here
I sort of get why you're apprehensive about creating two classes. Most likely, there's a DRY/WET error flag being raised in your mind.
Avoiding WET is a good reflex to have, but you can't always listen to it. If you were really to avoid all duplication, you should effectively also not create separate entity and DTO classes, as they are usually near copies of each other.
DRY is not an absolute. Taking the entity/DTO example, there is a balance of considerations here:
Do you want to avoid repetition at all costs? (= DRY)
Do you want to separate your DAL from your API logic? (= separation of concerns)
In this case, the latter generally wins out.
The same argument applies in your case. The arguments against following DRY (the ones I just listed) far outweigh the benefits of following DRY in this scenario.
We consume a WCF service using C# code. The client was generated in Visual Studio by right-clicking "Add Service Reference" and pointing it at the WSDL.
Recently, the WCF provider added some properties to one of the objects they serialize. The class went from
public class MyClass
{
    public string Foo { get; set; }
    public string Baz { get; set; }
    public string Zed { get; set; }
}
to this:
public class MyClass
{
    public string Foo { get; set; }
    public string Bar { get; set; } // <= New Property
    public string Baz { get; set; }
    public string Zed { get; set; }
}
On our end, this caused Baz and Zed to suddenly start being null when deserialized, until we updated the service reference. In fact, the real object had some ~20 properties alphabetically after Bar, and they were all null (or 0 for ints, false for bools, etc).
It seems an odd way for the deserialization to fail. It didn't throw an exception, and it didn't simply ignore the new property it knew nothing about; it made every property that appeared alphabetically after the new one deserialize to its default value.
So my question is, what's going on here and how do I prevent it? Preferably, I'd like some kind of setting for the client to tell it to "ignore new properties," but telling the service provider how they can prevent future breaking changes would be fine too.
MSDN has an article which lists the serialization order of the data members. One key point from that document:
current type's data members that do not have the Order property of the DataMemberAttribute attribute set, in alphabetical order.
So if you add a new property without setting the Order property of the DataMemberAttribute, the property is ordered alphabetically.
Based on the discussion here, your only options are:
Change the serializer to something else.
Make sure that the order of the elements in the XML matches the order of your properties; perhaps you can always set the Order property of the DataMemberAttribute (see the sketch after this list).
Make sure that your DLLs line up; I've seen some pretty funky issues in the past where one side of a service was pointing at an outdated DLL.
Also, remember the fundamentals of data contracts.
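For the second option, a hedged sketch of what pinning the order might look like, reusing the property names from the example above (the Order values are illustrative):
using System.Runtime.Serialization;

// Pin every member's position explicitly and append new members with higher
// Order values, so clients expecting the old element order keep working.
[DataContract]
public class MyClass
{
    [DataMember(Order = 1)]
    public string Foo { get; set; }

    [DataMember(Order = 2)]
    public string Baz { get; set; }

    [DataMember(Order = 3)]
    public string Zed { get; set; }

    // New property: appended last instead of landing alphabetically before Baz.
    [DataMember(Order = 4)]
    public string Bar { get; set; }
}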
I am working with Kinect for Windows version 2 and have run into a problem. I am trying to serialize the Body object and send it over the Internet. However, the Body object is not serializable. Although I can extract some key information from a Body object and create my own object, I may lose some information. My question is: how can I clone all the information from a Body object into my own serializable object?
Thank you.
If cloning is what you're concerned with, use AutoMapper.
First you'll need to install AutoMapper using NuGet...
PM> Install-Package AutoMapper
Then check out this example and adapt it to your own needs...
void Main()
{
    AutoMapper.Mapper.CreateMap<User, MyUser>()
        .ForMember(dest => dest.Name,
                   opt => opt.MapFrom(src => string.Format("{0} {1}", src.FirstName, src.LastName)));

    User user = new User
    {
        FirstName = "James",
        LastName = "Doe",
        DateOfBirth = DateTime.UtcNow
    };

    MyUser myUser = AutoMapper.Mapper.Map<MyUser>(user);
}
public class MyUser
{
    public string Id { get; set; }
    public string Name { get; set; }
    public DateTime DateOfBirth { get; set; }
}
public class User
{
    public User()
    {
        this.Id = Guid.NewGuid().ToString();
    }

    public string Id { get; set; }
    public string FirstName { get; set; }
    public string LastName { get; set; }
    public DateTime DateOfBirth { get; set; }
}
In the example above, AutoMapper figures out that it can map the Id property of the MyUser and User classes because they're named identically; however, we needed to create a custom map to map User.FirstName and User.LastName to the MyUser.Name property.
If the purpose of serialization is to reconstruct the object at the other end, the first thing you need to determine is whether the constructors and setters exist for you to create an equivalent on the other side. If it is purely for an independent representation that your server side needs to interact with, you have a much simpler task.
My recommendation would be to inspect the Body object, both via the public interface available through the documentation and via reflection in the debugger, to determine what data you can and want to extract, and to build a custom, serializable class based on that hierarchical model.
If all the data you need to extract is publicly accessible, simply write a builder class that takes the Body object as its input and constructs your custom class as its output. If it's not publicly accessible, you may need to use reflection to explore the pieces you need. I would advise writing the reflection code manually, so as to avoid cycles in the object graph that may exist in a private class like this.
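For the publicly accessible route, a builder along these lines might work. This is a hedged sketch: BodyDto is an illustrative name, and the Kinect-side property names (TrackingId, IsTracked, Joints, Joint.Position) should be verified against the v2 SDK.
using System;
using System.Collections.Generic;
using Microsoft.Kinect; // Kinect for Windows SDK v2

// Minimal serializable snapshot of a Body; extend with hand states,
// lean values, etc. as needed.
[Serializable]
public class BodyDto
{
    public ulong TrackingId { get; set; }
    public bool IsTracked { get; set; }

    // JointType (as int) -> [X, Y, Z] camera-space position.
    public Dictionary<int, float[]> JointPositions { get; set; }

    public static BodyDto FromBody(Body body)
    {
        var dto = new BodyDto
        {
            TrackingId = body.TrackingId,
            IsTracked = body.IsTracked,
            JointPositions = new Dictionary<int, float[]>()
        };
        foreach (var pair in body.Joints)
        {
            var p = pair.Value.Position;
            dto.JointPositions[(int)pair.Key] = new[] { p.X, p.Y, p.Z };
        }
        return dto;
    }
}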
It's alright. Just apply the following logic:
Use reflection to loop through the properties of the object you want to clone.
Then set up the data in your predefined custom class. (Perhaps you might want to generate the XML schema from the object's properties and create your own predefined custom class from that.)
Hope this concept helps. If not, let's discuss further.
I've been working on a project for a while to parse a list of entries from a CSV file and use that data to update a database.
For each entry I create a new user instance that I put in a collection. Now I want to iterate over that collection and compare each user entry to the corresponding user from the database (if it exists). My question is: how can I compare the user (entry) object to the user (db) object and return a list of the differences?
For example following classes generated from database:
public class User
{
    public int ID { get; set; }
    public string EmployeeNumber { get; set; }
    public string UserName { get; set; }
    public string FirstName { get; set; }
    public string LastName { get; set; }
    public Nullable<int> OfficeID { get; set; }
    public virtual Office Office { get; set; }
}
public class Office
{
    public int ID { get; set; }
    public string Code { get; set; }
    public virtual ICollection<User> Users { get; set; }
}
To save some queries to the database, I only fill the properties that I can retrieve from the CSV file, so the IDs (for example) are not available for the equality check.
Is there any way to compare these objects without defining a rule for each property, and to return a list of the properties that were modified? I know this question seems similar to some earlier posts; I've read a lot of them, but as I'm rather inexperienced at programming, I'd appreciate some advice.
From what I've gathered from my reading, should I be combining 'comparing properties generically' with 'ignoring properties using data annotations' and 'returning a list of CompareResults'?
There are several approaches you can take to solve this:
Approach #1 is to create separate DTO-style classes for the contents of the CSV files. Though this involves creating new classes with a lot of similar fields, it decouples the CSV file format from your database and lets you change either later without affecting the other. To implement the comparison, you could create a comparer class. As long as the classes are almost identical, the comparer can get all the properties from the DTO class and perform the comparison dynamically (e.g. by creating and evaluating a lambda expression that contains a BinaryExpression of type Equal).
Approach #2 avoids the DTOs but uses attributes to mark the properties that are part of the comparison. You'd need to create a custom attribute that you assign to the properties in question. In the comparer, you analyze all the properties of the class and filter out the ones that are marked with the attribute. For the comparison of the properties you can use the same approach as in #1. The downside of this approach is that it couples the comparison logic tightly to the data classes; if you needed to implement several different comparisons, you'd clutter the data classes with attributes.
Of course, #1 requires more effort than #2. I understand that it is not what you are looking for, but a separate, strongly-typed comparer class is also an approach worth considering.
Some more details on a dynamic comparison algorithm: it uses reflection to get the properties that need to be compared (depending on the approach, the properties of the DTO or the marked ones of the data class). Once you have the properties (in the case of DTOs, they should have the same name and data type in both classes), you can create a LambdaExpression and compile and evaluate it dynamically. The following lines show an excerpt of a code sample:
public static bool AreEqual<TDTO, TDATA>(TDTO dto, TDATA data)
{
    foreach (var prop in typeof(TDTO).GetProperties())
    {
        // Find the matching property on the data class by name.
        var dataProp = typeof(TDATA).GetProperty(prop.Name);
        if (dataProp == null)
            throw new InvalidOperationException(string.Format("Property {0} is missing in data class.", prop.Name));

        // Build, compile and invoke "(dto, data) => dto.Prop == data.Prop".
        var compExpr = GetComparisonExpression(prop, dataProp);
        var del = compExpr.Compile();
        if (!(bool)del.DynamicInvoke(dto, data))
            return false;
    }
    return true;
}
private static LambdaExpression GetComparisonExpression(PropertyInfo dtoProp, PropertyInfo dataProp)
{
    var dtoParam = Expression.Parameter(dtoProp.DeclaringType, "dto");
    var dataParam = Expression.Parameter(dataProp.DeclaringType, "data");
    return Expression.Lambda(
        Expression.MakeBinary(ExpressionType.Equal,
            Expression.MakeMemberAccess(dtoParam, dtoProp),
            Expression.MakeMemberAccess(dataParam, dataProp)),
        dtoParam, dataParam);
}
For the full sample, see this link. Please note that this dynamic approach is just a simple implementation that leaves room for improvement (e.g. there is no check on the data types of the properties). It also only checks for equality and does not collect the properties that differ, but that should be easy to transfer.
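As a hedged sketch of that transfer, a plain-reflection variant (no compiled expressions, for brevity) could collect the names of the differing properties; GetDifferences is an illustrative name, not part of the linked sample:
using System.Collections.Generic;

public static class ObjectComparer
{
    public static List<string> GetDifferences<TDTO, TDATA>(TDTO dto, TDATA data)
    {
        var differences = new List<string>();
        foreach (var prop in typeof(TDTO).GetProperties())
        {
            var dataProp = typeof(TDATA).GetProperty(prop.Name);
            if (dataProp == null)
                continue; // or throw, as AreEqual does above

            object dtoValue = prop.GetValue(dto, null);
            object dataValue = dataProp.GetValue(data, null);
            if (!Equals(dtoValue, dataValue))
                differences.Add(prop.Name);
        }
        return differences;
    }
}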
While the dynamic approach is easy to implement, the risk for runtime errors is bigger than in a strongly-typed approach.
I need to parse (and build) fixed-length, text-based messages that may in some cases contain array fields.
Example:
PARTA LOTA 02SUBLOT1 SUBLOT2 03TEST1 RESULT1 TEST2 RESULT2 TEST3 RESULT3
If this were an object, it might use the Lot object below.
Part Number (PARTA)
Lot Number (LOTA)
An Array of 2 SubLot Objects (SUBLOT1 with quantity 150 and SUBLOT2 with Quantity 999)
An Array of 3 Test Results (TEST1 with result 1234.67890, ...)
Note that the number of array items is specified in the message.
I was hoping to use the FileHelpers library that I've seen people talking about, but it doesn't appear to support multiple array fields where another field specifies the quantity, nor field types that are themselves decorated with [FixedLengthRecord()].
This is what I would like to be able to do. Note that the field length of 10 is just an artifact of keeping this example simple; not all fields would normally be defined with the same length.
[FixedLengthRecord()]
public class Lot
{
    [FieldFixedLength(10)]
    public string PartNumber { get; set; }

    [FieldFixedLength(10)]
    public string LotNumber { get; set; }

    [FieldFixedLength(10)]
    public SubLot[] SubLots { get; set; }

    [FieldFixedLength(10)]
    public Test[] Tests { get; set; }
}

[FixedLengthRecord()]
public class SubLot
{
    [FieldFixedLength(10)]
    public string SubLotNumber { get; set; }

    [FieldFixedLength(10)]
    public int Quantity { get; set; }
}

[FixedLengthRecord()]
public class Test
{
    [FieldFixedLength(10)]
    public string Description { get; set; }

    [FieldFixedLength(10)]
    public double Result { get; set; }
}
Does anyone have any idea whether this is possible with FileHelpers? Any other ideas? I have many different message types, so I would rather not code each one manually. The attribute-decoration approach in FileHelpers seems like a clean solution, and I'm considering just extending it, but I want to make sure I'm not missing a better solution out there.
I believe I have done something very similar in the past.
The way I tackled this was to use custom attributes. This allowed me to create classes and nested objects which described my data exactly as laid out in the specification, and to use custom attributes to describe each field's characteristics (length, type, padding requirements, whether it is required, etc.).
I also ended up writing custom serialization/deserialization for the classes and attributes. However, that was really specific to the actual application, as the data came through a custom government protocol which sent and received data in fixed-size chunks or packets over encrypted sockets, with continuation codes and so on.
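For illustration, here is a hedged, minimal sketch of that custom-attribute approach. FixedFieldAttribute and FixedLengthParser are illustrative names, and the count-prefixed array fields from the question would need extra handling on top of this:
using System;
using System.Reflection;

[AttributeUsage(AttributeTargets.Property)]
public sealed class FixedFieldAttribute : Attribute
{
    public int Length { get; private set; }
    public FixedFieldAttribute(int length) { Length = length; }
}

public static class FixedLengthParser
{
    public static T Parse<T>(string record) where T : new()
    {
        var result = new T();
        int offset = 0;
        // Note: GetProperties does not formally guarantee declaration order;
        // a production version should sort by an explicit position attribute.
        foreach (var prop in typeof(T).GetProperties())
        {
            var attr = (FixedFieldAttribute)Attribute.GetCustomAttribute(prop, typeof(FixedFieldAttribute));
            if (attr == null) continue;

            // Slice the next fixed-width field and convert it to the property type.
            string raw = record.Substring(offset, attr.Length).Trim();
            offset += attr.Length;
            prop.SetValue(result, Convert.ChangeType(raw, prop.PropertyType), null);
        }
        return result;
    }
}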
Tutorials
http://msdn.microsoft.com/en-us/library/aa288454%28v=vs.71%29.aspx
http://www.codeproject.com/KB/cs/attributes.aspx
http://www.devx.com/dotnet/Article/11579