Is it a good idea to make service calls during deserialization? - c#

In the Newtonsoft docs for CustomCreationConverter, it says:
"A more complicated scenario could involve an object factory or service locator that resolves the object at runtime."
I'm dealing with such a scenario: I have to hydrate a persistent object with an incoming DTO (which arrives as JSON). To hydrate the persistent object, I first have to load it using a service call. I can make this call during deserialization using a CustomCreationConverter, or I can consume the JSON as an ExpandoObject and copy each member across to the loaded object myself. The complexity is low either way, as the object graph is small.
Something about making service calls during deserialization does not seem right. It makes testing more complicated, as I would first have to load that object into memory just to be able to deserialize the JSON. It also screams tight coupling.
So my question is: Is it a good idea to make service calls during deserialization in this scenario?
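For reference, a converter along those lines might look roughly like this (IPersistentObjectService, PersistentObject and Load are placeholder names, not from the original question):
using System;
using Newtonsoft.Json.Converters;

public class PersistentObjectConverter : CustomCreationConverter<PersistentObject>
{
    private readonly IPersistentObjectService _service;
    private readonly int _id;

    public PersistentObjectConverter(IPersistentObjectService service, int id)
    {
        _service = service;
        _id = id;
    }

    public override PersistentObject Create(Type objectType)
    {
        // The service call happens here, in the middle of deserialization,
        // which is exactly the coupling that feels wrong.
        return _service.Load(_id);
    }
}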

As I understand it, you are doing these steps:
Call the deserialization
In a CustomCreationConverter you retrieve a pre-populated object instance via a remote Service
Json.NET does its thing on the retrieved instance.
Well, it seems to me you could make use of the PopulateObject method, like so:
var obj = RemoteService.Retrieve(id);
Newtonsoft.Json.JsonConvert.PopulateObject(jsonString, obj);
This way you keep your code simple (albeit less fun) and testable.

Related

Handling heterogeneous JSON in C#

A project I am working on consumes web APIs that return heterogeneous JSON. I don't understand why a web service API would need to return different object graphs, but that is what is being returned.
My questions are:
Is it usual/common practice for the same API to return different object graphs? By different object graphs I mean a varying complex object that may or may not have some other complex objects as properties. It would have seemed reasonable if the same properties were returned for every call, with either a null value or a complex object as their value, but properties being omitted from the response entirely makes it hard to have a C# class to deserialise against.
How is heterogeneous JSON (de)serialisation handled in C#? Are reflection and run-time code generation the preferred approach, or is using a dynamic/ExpandoObject better?
It makes sense for some APIs to return different objects: fields that happen to be complex objects may be omitted when they hold no information, or an endpoint may return different objects depending on its parameters. There may be other cases too.
As for handling the differing JSON, you have to know a little about what data the API returns and extract at least the data you need. You can use Json.NET and deserialize to a class, or to a dynamic object if you are not entirely sure what to expect or don't want to create a class for it.
I also advise reading the API documentation, or doing some (boundary?) testing with the parameters to figure out what data to expect.
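For example, with Json.NET you can probe the parts of the response you care about without committing to a full class (the JSON shape and property names below are made up for illustration):
var json = "{ \"id\": 1, \"details\": { \"name\": \"Widget\" } }";
var response = Newtonsoft.Json.Linq.JObject.Parse(json);

// Tokens that are present come back typed...
var name = (string)response.SelectToken("details.name");   // "Widget"

// ...while omitted ones come back as null instead of throwing.
var address = response.SelectToken("shipping.address");    // null

// Alternatively, deserialize to dynamic and pick out members as you go.
dynamic dyn = Newtonsoft.Json.JsonConvert.DeserializeObject(json);
var id = (int)dyn.id;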

Deserialize an object graph with private members in C#

I want to deserialize an object graph in C#. The objects in the graph will have object and collection properties, and some of the properties may be private, but I do not need to worry about cyclic object references. My intent is to use the deserialized object graph as test data while an application is being built; for this reason the objects need to be deserializable from the XML before any serialization has taken place. I would like it to be as easy as possible to freely edit the XML to vary the objects that are constructed, and I want the deserialization process not to require nested loops or nested LINQ to XML statements for each tier of the object graph.
I found the DataContractSerializer lacking. It can indeed deserialize into private fields and properties with a private setter, but it appears to be incredibly brittle with regard to the processing of the XML input. All it takes is for an element in the XML to be slightly out of order and it fails. What's more, the order in which it expects the data does not necessarily match the order in which the members are declared in the class (by default it orders data members alphabetically), making it impossible to determine what XML will work without first having the data in the objects so that you can serialize it and check what it expects.
The XmlSerializer does not appear to be able to serialize or deserialize non-public data of any kind.
Since the purpose is to generate test input data for what might be quite simple applications during development, I'd rather not have to resort to heavyweight ORM technologies like Entity Framework or NHibernate.
Is there a simple solution?
[Update]
@Chuck Savage
Thanks very much for your reply. I'm responding in this edit due to the comment character limit.
In the technique you suggested, the logic to deserialize each tier of the object hierarchy is maintained in each class, so in a sense you do have nested LINQ to XML, just spread out across the various classes involved. The technique also keeps, in each class, a reference to the XElement from which that object gets its values, so in that sense it isn't so much deserialized as just a wrapper around the XML. In the scenario I have in mind I'd ideally like to be deserializing the actual business objects the application will use, so an XML-wrapper style object wouldn't work very well, since it would require a distinctly different implementation for test usage compared to production usage.
What I'm really after is something that can do roughly what the XmlSerializer does, but which can also deserialize private fields (or at least properties with no setter). The reason is that the XmlSerializer does what it does with minimal impact on the 'normal' production use of the classes involved (and hence no impact on their implementation).
How about something like this: https://stackoverflow.com/a/10158569/353147
You will have to create your own boilerplate code to go back and forth to XML, but with the included extensions that can be minimized.
Here is another example: https://stackoverflow.com/a/9035905/353147
You can also find my other answers on the topic by searching for user:353147 XElement in the Stack Overflow search.
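A minimal sketch of the kind of per-class boilerplate those linked answers suggest, assuming a made-up Customer class with private setters (the XML shape is illustrative):
using System.Xml.Linq;

public class Customer
{
    public string Name { get; private set; }
    public int Age { get; private set; }

    // The class owns its own hydration logic, so the private setters stay private.
    public static Customer FromXml(XElement element)
    {
        return new Customer
        {
            Name = (string)element.Element("Name"),
            Age = (int)element.Element("Age")
        };
    }
}

// Usage:
// var customer = Customer.FromXml(XElement.Parse("<Customer><Name>Ada</Name><Age>36</Age></Customer>"));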

Return an already serialized type in WCF

I'd like to use WCF to set up a cross-platform web service. A problem - actually more of a performance issue - is that I'd like to return a type (let's say Event) and I already have this event as XML. So I'd like to avoid deserializing it to an Event and then having WCF serialize it back to XML. Any idea how to manage this? What I want to achieve is something like: "WCF, this method returns an Event object, but I've already serialized it to XML, so take it and don't force me to deserialize it first just so you can serialize it again".
Daniel
The WCF component that does message (de)serialization is the MessageFormatter.
Hence, you could provide a custom IDispatchMessageFormatter. In the SerializeReply() method (which returns a Message) you could use the Message.CreateMessage() overload that takes an XmlReader, and supply an XmlReader that you create from your XML. And that's it. A bit of work to do, though; you need to decide whether it's worth it.
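A rough sketch of that reply path, assuming the operation already has the Event XML available as a string (the action URI and class names here are illustrative):
using System.IO;
using System.ServiceModel.Channels;
using System.ServiceModel.Dispatcher;
using System.Xml;

public class RawXmlReplyFormatter : IDispatchMessageFormatter
{
    public void DeserializeRequest(Message message, object[] parameters)
    {
        // Request handling is not the concern here; omitted for brevity.
    }

    public Message SerializeReply(MessageVersion messageVersion, object[] parameters, object result)
    {
        // result is assumed to be the pre-serialized Event XML string.
        var reader = XmlReader.Create(new StringReader((string)result));
        return Message.CreateMessage(messageVersion, "http://tempuri.org/IEventService/GetEventResponse", reader);
    }
}
You would still need to attach the formatter to the operation, for example via a custom operation behavior.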
I think you should use the Message class for both the service request and response instead of a DataContract definition: this should give you more control over the SOAP message structure. If you go down this path, however, you will need to create a custom proxy (see here for a start).
I am not aware of any way to do that. Unless your Event is large, that extra step probably isn't going to hurt you, and at least it is local - and if it is large, your main problem is bandwidth, which will be the same either way.
You can expose the data as XmlElement in the message, which will avoid this step - but then callers will need to know to recognise it as an Event (as all they will see in the mex/wsdl is a chunk-o'-xml).
Ultimately, part of the reason it is doing this is that WCF is an object based model, and the serializer can actually be swapped via many tricks - so in terms of the regular WCF model, the fact that you have xml is irrelevant: that might not actually be anything like what goes down the wire. It needs the object so it can ask the actual serializer to do the job.
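The XmlElement approach from the earlier answer could look something like this (the contract and member names are made up for illustration):
using System.Runtime.Serialization;
using System.Xml;

[DataContract]
public class EventEnvelope
{
    // Callers see an opaque chunk of XML rather than a typed Event.
    [DataMember]
    public XmlElement EventXml { get; set; }
}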

WCF - Complex Objects - KnownTypes

OK, not really sure how to word this, but I will try my best.
I have a number of WCF services that are set up and running, awaiting an object to come in for processing.
WCFServiceA
WCFServiceB
WCFServiceC
Service A will run some processing and decide to send the object onto Service B or C.
So my object has the [DataContract] attribute on all of its classes and [DataMember] on all of its properties.
So far, so good.
But now I will lose all the functionality of my object, as what arrives is basically a serialised version of it.
So, if I want to work with the full complex object, is it best practice to reference the same assembly in all three services and send things across as "KnownTypes", while still providing the basic DataContract and DataMember definitions for any consumer of the services that does not know these types, so they can still create these objects for the services to run with?
Hope I have worded this correctly and you understand my question here.
:EDIT:
To try and clarify.
The object I am sending can have a "Policy" attached to it, this policy object is a class and can be one of several types, vehicle, house, life, pet policy etc.
But the actual type will not be known by the receiving service. Hence the need for KnownTypes.
I think I just answered my own question!! :)
That was a good explanation of the problem. The drawback I see in this approach is that if you are going to update the object, say by adding new properties or removing some, all three services need to be updated with the new assembly.
Using known types can sometimes lead to backward-compatibility issues when you want to upgrade the objects in a live environment, depending on the setup.
Alternatively, create a DTO (data transfer object) with just the properties and pass it across the services as a data contract, moving the complex logic into a helper class that the services can reference.
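A minimal sketch of the known-types setup described in the question (the Policy subclasses are illustrative):
using System.Runtime.Serialization;

[DataContract]
[KnownType(typeof(VehiclePolicy))]
[KnownType(typeof(HousePolicy))]
public class Policy
{
    [DataMember]
    public string PolicyNumber { get; set; }
}

[DataContract]
public class VehiclePolicy : Policy
{
    [DataMember]
    public string Registration { get; set; }
}

[DataContract]
public class HousePolicy : Policy
{
    [DataMember]
    public string Address { get; set; }
}
A [DataMember] typed as Policy can then carry either subclass across the wire, and the receiving service can deserialize it as long as it knows these contracts.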

How to dispose objects in a Singleton WCF service

How do you dispose of objects in a singleton WCF service? I am using Entity Framework (3.5) and returning a bunch of custom POCO objects to the client. The service needs to stay alive, as it provides cross-client communication, and hence duplex binding is used. I would like to dispose of all the POCO objects created once they have been serialized to the client.
As the session, and hence the service, stays alive, it looks like the framework is not garbage-collecting these objects, and over time the service crashes with an "insufficient memory" style error (after about 2 GB).
I don't think Dispose can be called before the return statement, as the objects are not yet serialized by then.
Please suggest a solution.
Thanks in advance.
First, do not use a singleton service; as to why, your question is the answer.
As I see it, your service should be per-call instanced, and the callback channels should be managed by another class or as a static member of the service class.
Second, check whether you keep references to the POCOs you return to the client, because the GC only cleans up unreferenced objects. If you find such a reference, just set those members to null and the GC will do the rest (you have nothing to worry about with method-local variables).
I think you're on the wrong track here; if your objects are POCO, do they even implement IDisposable (not sure why you would for a POCO class). My guess is you've got something else that is chewing up your memory. Possibly your singleton service is just living too long and collecting too much crap; you might want to look at a different service model. Maybe an instance per session or something like that.
One thing you could do, however, is, rather than serializing your POCO objects directly, create very simple 'messaging' classes that have only the properties you want to serialize and send those instead. You could copy the properties into your message objects and then dispose of your database objects immediately.
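For example, a per-call mapping along these lines lets the data context be disposed before the reply is serialized (OrderDto, Order and ShopEntities are illustrative names, and the snippet assumes a using System.Linq directive):
public OrderDto GetOrder(int id)
{
    using (var context = new ShopEntities())
    {
        var order = context.Orders.Where(o => o.Id == id).First();

        // Copy only what the client needs; once the method returns, nothing
        // keeps the entity graph or the context alive.
        return new OrderDto
        {
            Id = order.Id,
            Total = order.Total
        };
    }
}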
