How do you use an OData source as a System.Data.DataTable? - c#

Is there a convenient way of fetching an OData source into a System.Data.DataTable?
The most common use case of OData seems to be letting System.Data.Services.Client map OData entities to .NET objects, but this requires knowing the structures before run time, which I do not. My current workaround is to go low level: fetch the XML myself, loop over the items in the DOM, and put them into a new DataTable. I am looking for a higher-level approach, if one exists.

Currently I don't know of any such solution; the XML-based reading you describe is probably the best you can do. We're working on a library (ODataLib) which will allow you to read and write OData without strongly typed .NET objects.
A first CTP is part of this release: http://blogs.msdn.com/b/astoriateam/archive/2011/06/30/announcing-wcf-data-services-june-2011-ctp-for-net4-amp-sl4.aspx
It should be able to read JSON payloads (and write both JSON and ATOM).
A somewhat older source code drop is here: http://odata.codeplex.com/releases/view/60787 but it doesn't have the readers implemented yet.
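For reference, a minimal sketch of the XML-based workaround described in the question: it flattens each entry's m:properties element from an ATOM feed into DataTable columns. The feed URL is only an example, all values are kept as strings, and null handling (m:null) is omitted for brevity.

    using System;
    using System.Data;
    using System.Net;
    using System.Xml.Linq;

    class ODataToDataTable
    {
        static readonly XNamespace M =
            "http://schemas.microsoft.com/ado/2007/08/dataservices/metadata";

        // Downloads an ATOM feed and flattens each entry's <m:properties>
        // element into a DataTable row (one column per property).
        static DataTable Fetch(string feedUrl)
        {
            XDocument doc;
            using (var client = new WebClient())
                doc = XDocument.Parse(client.DownloadString(feedUrl));

            var table = new DataTable();

            // First pass: collect the union of all property names as columns.
            foreach (var props in doc.Descendants(M + "properties"))
                foreach (var prop in props.Elements())
                    if (!table.Columns.Contains(prop.Name.LocalName))
                        table.Columns.Add(prop.Name.LocalName, typeof(string));

            // Second pass: one row per entry.
            foreach (var props in doc.Descendants(M + "properties"))
            {
                var row = table.NewRow();
                foreach (var prop in props.Elements())
                    row[prop.Name.LocalName] = prop.Value;
                table.Rows.Add(row);
            }
            return table;
        }

        static void Main()
        {
            var t = Fetch("http://services.odata.org/OData/OData.svc/Products");
            Console.WriteLine("{0} rows, {1} columns", t.Rows.Count, t.Columns.Count);
        }
    }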

XML reading with backward compatibility

The XML I am currently working with is formed directly by the XML serializer (serializing a class and its nested counterparts).
The addition of a new property is handled directly by the serializer, but the problem comes when a property (value type) is deleted, or an entire class is removed or added.
I wish to read the old as well as the new XML files, but I can't seem to figure out how.
Some approaches I can think of, though I don't believe they lead to maintainable code:
1) Write a custom XML parser (this is less flexible, since every time something changes the parser has to be updated, and hence tested again).
2) Use multiple models, then migrate from old to new (taking the essential components).
3) Export the old file and import it as a new file (this would also require another XML file, and may be related to point 2).
4) Any other means (please suggest).
I am not well versed in XML and its versioning.
Also, is XML a good choice for this, or is there another file type/DB that I could use in place of XML?
Any help in this regard would be appreciated.
In most ways, XmlSerializer already has pretty good version support built in. In most cases, if you add or remove elements it isn't a problem: extra (unexpected) data is silently ignored, or put into the [XmlAnyElement] / [XmlAnyAttribute] member (if one exists) for round-tripping. Any missing data just won't be initialized. The only noticeable problem is with sub-types, but adding and removing sub-types (or entire types) is going to be fairly fundamental to any serializer. One common option in the case of sub-types is to use a single model and just never remove any sub-types (adding sub-types is fine, assuming you don't need to be forwards compatible). However, if this is not possible, multiple models (one model per revision) is not a bad approach.
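A minimal sketch of that tolerance (the Settings class and its members are made up for illustration): unknown elements from an older file land in the [XmlAnyElement] member instead of failing, and missing elements simply keep their defaults.

    using System;
    using System.IO;
    using System.Xml;
    using System.Xml.Serialization;

    public class Settings
    {
        public string Name;
        public int Timeout;

        // Catch-all for elements this version of the class no longer knows,
        // kept so they can be written back out (round-tripped).
        [XmlAnyElement]
        public XmlElement[] ExtraElements;
    }

    class Demo
    {
        static void Main()
        {
            // "RetryCount" was removed from the model; it ends up in ExtraElements.
            const string oldXml =
                "<Settings><Name>prod</Name><Timeout>30</Timeout>" +
                "<RetryCount>5</RetryCount></Settings>";

            var serializer = new XmlSerializer(typeof(Settings));
            var s = (Settings)serializer.Deserialize(new StringReader(oldXml));
            Console.WriteLine("{0}, unknown elements: {1}", s.Name, s.ExtraElements.Length);
        }
    }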
I usually follow your solution #2, namespace-versioning my models (Myapp.Models.V1.MyModel); this way you can maintain backward compatibility with clients still using the older schema (or, in your case, loading an older file).
As suggested in the comments, you can use a simple attribute on the root node to record the version, and read the version number with either an XmlReader or even a simple regex on the first line of the file.
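A sketch of the XmlReader variant; the "version" attribute name here is just an assumption:

    using System;
    using System.Xml;

    static class SchemaVersionProbe
    {
        // Reads only as far as the root element and returns its version
        // attribute, without deserializing the whole document.
        public static int Read(string path)
        {
            using (var reader = XmlReader.Create(path))
            {
                reader.MoveToContent();                  // position on the root element
                var v = reader.GetAttribute("version");  // e.g. <MyModel version="2">
                return v == null ? 1 : int.Parse(v);     // no attribute => first version
            }
        }
    }

The result can then drive which namespaced model (Myapp.Models.V1, V2, ...) you deserialize into.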
As for your second question about file type/DB: depending on your needs, I would highly recommend looking at a document database like MongoDB or RavenDB, as implementation is straightforward and simple, and it does not require an ORM tool like Entity Framework to handle proper separation of concerns. If you need something portable, as in a desktop app "save file", SQLite is a good file-based database, but you will likely want to use an ORM to map your model to the database.
Links:
MongoDB: http://www.mongodb.org/
RavenDB: http://ravendb.net/
SQLite: http://www.sqlite.org/

Is there a way to emulate Jackson's Mixins in JSON.Net?

I'm currently working on a few utility libraries to aid in the integration between two existing systems. As part of the integration process, I need to be able to convert objects to JSON.
For various reasons, I need to be able to modify the serialized field names (i.e. convert camel case to snake case, and in some instances change the field name altogether).
One half of the system is written (mostly) in Java, and is entirely under my control. My preferred solution for serializing / deserializing JSON is to use Jackson. For a variety of reasons, it is considered a risk for us to modify the existing entity classes in order to apply the required attributes for Jackson to produce the correct JSON. Fortunately, Jackson provides Mixins, which essentially allow me to apply annotations dynamically. This is far, far superior to writing custom serializers and deserializers to do the same job.
The other half of the system is an ASP.Net application, and again I would like to modify as little of the existing code as I can get away with. I am currently using JSON.Net for serialization / deserialization, and it seems to support everything I need, including defining attributes to override property names.
However, one thing I can't seem to work out is whether JSON.Net supports the same concept of Mixins as Jackson does. If I can get away with it, I'd like to avoid modifying the existing .NET entity classes to include new attributes, but I can't find any documentation suggesting that this feature exists within JSON.Net.
So, does anybody know if there is a (documented / undocumented) way to apply Jackson-like mixins using JSON.Net, or will I need to write custom serializers / deserializers?
Not sure if this helps, but there is a sort-of external implementation of Jackson's mix-in handling as part of the ClassMate project. The library does many other things too, so I don't know how easy it would be to extract the part that handles merging of regular annotations and mix-ins.
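On the JSON.Net side, I don't know of a mixin feature either, but a custom contract resolver gets close for the renaming part without touching the entity classes. A rough sketch (the override dictionary and the snake-case fallback are assumptions about what you need, not JSON.Net features):

    using System;
    using System.Collections.Generic;
    using Newtonsoft.Json;
    using Newtonsoft.Json.Serialization;

    // Applies externally supplied name overrides during (de)serialization,
    // loosely emulating what a Jackson mixin would do for renames.
    public class MixinLikeContractResolver : DefaultContractResolver
    {
        private readonly IDictionary<string, string> _overrides;

        public MixinLikeContractResolver(IDictionary<string, string> overrides)
        {
            _overrides = overrides;
        }

        protected override string ResolvePropertyName(string propertyName)
        {
            string mapped;
            if (_overrides.TryGetValue(propertyName, out mapped))
                return mapped;               // explicit rename, e.g. "Id" -> "legacy_id"
            return ToSnakeCase(propertyName); // fallback: PascalCase -> snake_case
        }

        private static string ToSnakeCase(string name)
        {
            var sb = new System.Text.StringBuilder();
            foreach (char c in name)
            {
                if (char.IsUpper(c) && sb.Length > 0) sb.Append('_');
                sb.Append(char.ToLowerInvariant(c));
            }
            return sb.ToString();
        }
    }

Usage would look something like:

    var json = JsonConvert.SerializeObject(entity, new JsonSerializerSettings
    {
        ContractResolver = new MixinLikeContractResolver(
            new Dictionary<string, string> { { "Id", "legacy_id" } })
    });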

ACORD Standard for Insurance. Has anybody dealt with this mess?

We need to implement a WCF web service using the ACORD Standard.
However, I don't know where to start, since this standard is HUMONGOUS and very convoluted. Total chaos to my eyes.
I am trying to use WSCF.Blue to extract the classes from the multiple XSDs I have, but so far all I get is a bunch of crap: a .cs file with 50,000+ lines of code that freezes my VS2010 all the time.
Has anybody already walked through the Valley of Death (the ACORD Standard) and made it? I would really appreciate some help.
I wrote an ACORD-to-C# class library converter which was then used in several large commercial insurance products. It featured a very nice mapping of all of the ACORD XML into concise, extendable C# classes. So I know from whence you come!
Once you dig into it, it's not so bad, but I maintain the average coder will not 'get it' for about 3-4 months if they work at it full time (assuming anything but inquiry-style messages). The real problem comes when trying to map from a backend database and to/from another ACORD WS. All of the carriers, vendors, and agencies have custom rules.
My best suggestion is to find working code examples (I have tons if you need them) and maybe even a vendor or carrier who will let you hook up to an ACORD WS in a test environment.
It sounds like you are heading down the right path but are lost in the forest.
The ACORD Standard is huge and intentionally so, as it provides support for hundreds of different messages. Just as you do not download all of Wikipedia to get just a few articles, you do not need all of the classes in the ACORD Standard to support an implementation of a few messages. If you know what messages you need to support then you can generate a subset of the full XSD that will be quite manageable.
As mentioned in Hugh's response, for any one message only a fraction of the full XSD is used. How you go about generating that subset will depend on the specifics of your project. If you are looking for ideas on how to generate a subset of the full XSD, try reaching out to the ACORD staff for help at PCS#acord.org. They should be able to offer you some help in getting started.
I have worked with the ACORD PCS exposure reporting standards, and yes, it was a nightmare. I have also worked with other large standards like FpML and SportsML.
You need to work out exactly which types from the schema are needed. How you do this is up to you, but the VS schema viewer should be able to handle it. If not, try XMLSpy, or just go through it by hand if you have to. Make sure you have a good BA to hand...
Chances are you will find that you can meet your requirements by using around 1% of the types available in the standard.
What you'll probably find is that you can express the core objects with a very minimal set of values, as most nodes will be minOccurs=0 or nillable.
Then you can use the /element switch on xsd.exe to generate the code for just the types you need.
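For example (the element name "PolicyRq" is just a placeholder for whichever message root you actually need):

    xsd.exe ACORD.xsd /classes /element:PolicyRq

This way the generated code covers just that message and the types it uses, rather than the entire standard.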
As one commenter says, there is no easy pill to swallow here. The irony is that standards are supposed to make everyone's lives easier.
If you are looking to read/write ACORD documents using .NET, I just stumbled across the "IVC Software Factory for ACORD Standards" on CodePlex at http://ivc.codeplex.com.
From the limited documentation it appears as if this library can convert objects to ACORD XML documents, and vice-versa. The source code comes with different "providers" i.e. different ACORD transaction types, like 103 or 121.
Hope this helps.
I would recommend not creating a model for the entire standard. You can pass XML around without serializing it into a model at all: load it into an XDocument/XElement, query it with LINQ, and update the DOM using LINQ to XML. So you are not loading the XML into a strongly typed model, just loading the XML. There is no model, just an XML document.
From there, one can pick the data off of the XML as needed.
Using this approach, the code will be ugly and have little context, since XElements get passed everywhere and there are tons of magic XPath strings to query and define elements, but it can work. Also, everything is a string, so there will be utility methods for converting to numbers, date-times, etc.
From my perspective: I have modeled part of ACORD into an object model using the XmlSerializer, but it's well over 500 classes. The model was not generated from the XSD by a tool; it was crafted manually, and that took some time. Tooling will produce monstrous, unusable classes (as you have mentioned) and/or flat out crash. As an example, I tried to load the XSD into Stylus Studio and it crashed several times.
So your best bet, if you're strapped for time, is loading into an XDocument as opposed to trying to map out everything in a model. I know that sucks, but ACORD in general is basically a huge data hot mess.
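A rough sketch of that XDocument style (the namespace, element names, and file name below are placeholders, not a real ACORD message layout):

    using System;
    using System.Linq;
    using System.Xml.Linq;

    class AcordPeek
    {
        static void Main()
        {
            // Placeholder namespace and input file.
            XNamespace ns = "http://www.ACORD.org/standards/PC_Surety/ACORD1/xml/";
            var doc = XDocument.Load("policy.xml");

            // Pick individual values off the DOM as needed; everything is a
            // string, so conversion happens at the call site (here via
            // XElement's explicit cast operators).
            var amount = (decimal?)doc.Descendants(ns + "CurrentTermAmt")
                                      .Elements(ns + "Amt")
                                      .FirstOrDefault();
            Console.WriteLine(amount);
        }
    }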

.net wrapper for OData to access any data source

I am looking for an OData wrapper in C# that can talk to any OData data source and return the result as properties instead of the raw XML. I looked at http://odata.codeplex.com/ but it is designed around the concept of pointing to a specific data source and building code that maps to it.
We need to create code that is pointed at an OData data source at runtime, reads the metadata, and then interactively calls the source with queries and uses the returned data. (I also believe LINQ won't work for us, because end users create the queries once we connect - there is no writing and compiling code.)
Is there anything out there?
thanks - dave
I assume you want to consume an arbitrary OData service as a client, right? For that I would suggest ODataLib (http://www.nuget.org/packages/Microsoft.Data.OData). It is a reader and writer for OData, nothing more, so it will require more code from you than WCF Data Services would, but it allows consuming arbitrary OData payloads without having to generate matching types. You might also want to check out this blog post for a start: http://blogs.msdn.com/b/astoriateam/archive/2011/10/14/introducing-the-odata-library.aspx
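If you want to see what runtime discovery looks like before committing to ODataLib, even plain LINQ to XML can read a service's AtomPub service document to find its entity sets. A minimal sketch, not using ODataLib; the service URL is only an example:

    using System;
    using System.Net;
    using System.Xml.Linq;

    class ServiceDocumentReader
    {
        static readonly XNamespace App = "http://www.w3.org/2007/app";

        static void Main()
        {
            // Example public OData service; substitute your own root URL.
            var serviceRoot = "http://services.odata.org/OData/OData.svc/";
            using (var client = new WebClient())
            {
                var doc = XDocument.Parse(client.DownloadString(serviceRoot));

                // Each <app:collection href="..."> is an entity set that can
                // then be queried interactively ($filter, $top, ...).
                foreach (var collection in doc.Descendants(App + "collection"))
                    Console.WriteLine((string)collection.Attribute("href"));
            }
        }
    }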
You might have to write a custom provider for what you are trying to achieve.
The following blog posts are pretty helpful:
http://blogs.msdn.com/b/alexj/archive/2010/01/07/data-service-providers-getting-started.aspx
http://blogs.msdn.com/b/vitek/archive/2010/02/25/data-services-expressions-part-1-intro.aspx

What is the best approach to versioning the tracked data within workflows?

This is a general question concerning Workflow Foundation (.NET 3.5) and versioning the data it works with. We have a lot of custom activities that work with some data, and this data may also be interesting for future analysis of already completed workflows (provided we configure tracking so that it is stored in serialized form).
It may be necessary to show the data from the past in the UI, but the data inevitably changes structure (the class definition, or the internal structure if it's dynamic), and the redeployed version of our library will contain the new data definition while the serialized data in the tracking database will still be in the old structure.
Is it better to use a dynamic structure that doesn't change from the beginning (like a property bag), or to deal with it at redeployment and somehow transform the serialized BLOB into the new format? Have you ever used either approach in a similar scenario?
A lot depends on how you deploy your application. If you use a strong name and deploy to the GAC or multiple private assembly paths, deserializing a workflow will deserialize the exact version of your class. That means your code must be able to work with multiple versions, and that can be a bit of a pain; storing data in a property bag is not going to help you there. Using assembly redirects to point to the current version of an activity solves that part, and I suppose a property bag would make life simpler then. That said, I tend to stick with dependency properties and regular serializable classes so far.
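For the assembly-redirect route, the usual app.config shape looks something like this (the assembly name, public key token, and version numbers are made up for illustration):

    <configuration>
      <runtime>
        <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
          <dependentAssembly>
            <!-- Hypothetical activity assembly; use your real name/token. -->
            <assemblyIdentity name="MyCompany.Activities"
                              publicKeyToken="32ab4ba45e0a69a1"
                              culture="neutral" />
            <!-- Old persisted workflows resolve to the current version. -->
            <bindingRedirect oldVersion="1.0.0.0-1.9.9.9" newVersion="2.0.0.0" />
          </dependentAssembly>
        </assemblyBinding>
      </runtime>
    </configuration>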
I did a series of blog posts about long-running workflows and versioning, where you run into exactly the same problem. Check here for more details.
