I have a use case where we generate a lot of custom profiles based on specific data modelling requirements. I am trying to find a way to generate models (C# classes) from those profiles, so that instances can be serialized and deserialized and represented in a way that conforms to the profile constraints. I have seen many implementations for validating against profiles, but none covering the generation aspect. Please let me know if there is a way to achieve this.
PS: All the resource types used are FHIR base resources.
Serialization and deserialization of FHIR instances is the same for all resources, regardless of profile. As such, there's no need for profile-specific code. Just use the generic .NET reference implementation. Profiles only change constraints, not what elements are named or how they appear in instances. If you want profile-specific code to handle validation, there currently aren't any general-purpose solutions that do that - you'd have to create your own. However, the generic .NET reference implementation will perform the necessary validation if provided with the relevant StructureDefinitions.
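For illustration, a minimal sketch of that round trip using the Firely .NET SDK (the Hl7.Fhir NuGet packages, i.e. the .NET reference implementation; exact class names can differ between SDK versions, and the Patient content here is made up):

    using Hl7.Fhir.Model;
    using Hl7.Fhir.Serialization;

    // Build a plain base resource; no profile-specific class is needed.
    var patient = new Patient();
    patient.Name.Add(new HumanName { Family = "Doe", Given = new[] { "Jane" } });

    // Serialize and parse with the generic serializer/parser; the same code
    // works whether or not the instance claims conformance to a profile.
    var json = new FhirJsonSerializer().SerializeToString(patient);
    var roundTripped = new FhirJsonParser().Parse<Patient>(json);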
I am currently working with a piece of software known as Kofax TotalAgility or KTA for short.
This is Business Process Automation software, which I have the "pleasure" of expanding with custom .NET libraries.
I have been creating a MS Graph library to perform actions with the MS Graph API. The API works great and I am quite pleased with how it turned out.
However, due to the way KTA accesses methods in classes, I have used "data classes" (I don't know if that is the right word) as input parameters for my methods. To be clear, these classes have no functionality other than storing data for the methods to use. The reason I am doing this is because of the way it is structured in the KTA class inspector (I am assuming that KTA uses the IL code from my library to create a list of classes and methods).
This is what I expect the user to be shown when they are using my methods. As you can see, by using classes as input parameters I get this nice hierarchical structure.
Using classes as input parameters causes another issue, however: my "data classes" are shown in the list of classes, which produces a lot of unnecessary clutter.
Is there a way to hide these classes from the inspector? I get that it might be an internal KTA/Kofax issue, which of course would mean I am not asking in the right place.
However, if there is some C# or .NET way of doing this, that would be preferable.
There are a number of different terms for the data/parameter classes that you mention, such as DTO (data transfer objects), POCO (plain old C# objects), or the one that you can see in the KTA product dlls: model classes.
There is not a direct way to hide public classes from KTA. However, when you use the KTA API via the TotalAgility.Sdk.dll, you notice that you don’t see all of the parameter classes mixed in with the list of the classes that hold the SDK functions. The reason is just that these objects are in a separate referenced assembly: Agility.Sdk.Model.dll. When you are configuring a .NET activity/action in KTA, it will only list the classes directly in the assembly that you specify, not referenced assemblies.
If you are using local assembly references in KTA, then this should work because you can just have your referenced assembly in the same folder as your main dll. However, if you are ILMerging into a single dll so you can add it to the .NET assembly store, then this approach won't work.
When ILMerged together, the best you can do is to have your parameter classes grouped in a namespace that makes this clear. What I do is have a main project with just one class that acts as a wrapper for any functions I want to expose. Then I use ILMerge with the internalize option, which changes visibility to internal for any types not in the primary assembly. To allow the model classes to stay public, I keep them in a specific namespace and add that namespace to the exclude list for the internalize command (see the sketch below). See Internalizing Assemblies with ILMerge for more detail.
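As a rough sketch of what that looks like (assembly and namespace names here are hypothetical), the exclude file contains regular expressions, one per line, matching the types that should remain public:

    MyCompany\.MyLibrary\.Models\..*

and the merge is then run with the internalize option pointing at that file, with the wrapper assembly listed first as the primary assembly:

    ILMerge.exe /out:MyLibrary.Merged.dll /internalize:InternalizeExcludes.txt MyLibrary.dll MyLibrary.Models.dll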
Keep in mind that anyone seeing this list is configuring a function call with your dll. Even if they are not a skilled developer, they should at least have some competence for this type of task (hopefully). So even if the list shows a bunch of model classes, it shouldn’t be too hard to follow instructions if you tell them which class is to be used.
I tried the SOLID architecture within my last project.
I have an interface called ILog and a class Log that implements ILog. (In my understanding, that should be done to follow the Open/Closed Principle.)
In order to stay open for extension, I implemented the front end via List<ILog> instead of with the concrete implementation List<Log>.
Serializing the List<ILog> is no problem, but deserializing is. I understand why of course, because the deserializer does not know which implementation class it should use.
Question:
How to know into which concrete type to deserialize an object that was serialized through an interface reference?
Serializing the List<ILog> is no problem, but deserializing is.
If you are deserializing, you necessarily need to somehow communicate to your serializer which concrete representation of your interface to use. In the case of Json.NET you could use the JsonConstructorAttribute (see also this answer) or resolvers in combination with dependency injection.
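As an illustration of one way to communicate that (not the only option), here is a minimal Json.NET sketch of a converter that tells the serializer which concrete class to instantiate for a given interface; the ILog/Log names mirror the question:

    using System;
    using System.Collections.Generic;
    using Newtonsoft.Json;

    public interface ILog { string Message { get; set; } }

    public class Log : ILog
    {
        public string Message { get; set; }
    }

    // Deserializes any TInterface-typed value as the concrete TConcrete.
    public class ConcreteTypeConverter<TInterface, TConcrete> : JsonConverter
        where TConcrete : TInterface
    {
        public override bool CanConvert(Type objectType) => objectType == typeof(TInterface);

        public override object ReadJson(JsonReader reader, Type objectType,
            object existingValue, JsonSerializer serializer)
            => serializer.Deserialize<TConcrete>(reader);

        public override void WriteJson(JsonWriter writer, object value, JsonSerializer serializer)
            => serializer.Serialize(writer, value);
    }

    // Usage:
    // var logs = JsonConvert.DeserializeObject<List<ILog>>(json,
    //     new ConcreteTypeConverter<ILog, Log>());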
Question: What is the benefit of working with List<ILog> if I have to define the specific implementation class for data storage / data import anyway?
Interfaces decouple your code from the actual implementation, which results in various benefits. For example, in terms of unit testing they make mocking easier (since you can satisfy the interface with a mocked instance instead of being forced to use the "real" class). Interfaces also allow you to benefit from covariance/contravariance, which you wouldn't have with classes in C#. For further reading on the benefits of interfaces, have a look at the various answers to this question or see this blog post.
The above being said, interfaces always introduce a certain level of overhead/abstraction and you need to evaluate per case/situation, whether they make sense or not.
What would be the best way to handle the data-storage of interface objects or are they only used at runtime?
You necessarily need to store concrete representations, which means at the time of persistence you need to decide which concrete implementation to use for storage (and later deserialization).
I'd like to use C#'s reflection and custom attributes to simplify registering a series of types with a central management class (i.e. it provides static methods taking a string key and invoking/retrieving the proper method/parameter for the associated type). Looking at other questions here and a couple places elsewhere, it seems like the best way of doing so is to simply iterate through all public types of the assembly -- since it's intended to be a library -- and check if each type has the proper attribute before adding the relevant values to the underlying Dictionaries. The reflection and iteration will definitely be slow, but I can live with it since it should only occur once.
Unfortunately, I can't figure out how to get an attribute from a type. For methods and assemblies, I can use CustomAttributeExtensions.GetCustomAttribute<MyAttribute>(base) from System.Reflection.Extensions, but that doesn't provide an overload for Type; the same goes for Assembly.GetCustomAttribute(Assembly, Type) and the .IsDefined(...) methods used in this question. Other suggestions use methods on the Type itself that, from the documentation, seem to be loaded from mscorlib.dll, but they didn't seem to show up in IntelliSense even after adding the reference, and I'm not sure how that .dll interacts with .NET Standard anyway (as in, does it reduce the ability to run on arbitrary platforms at all?).
Am I missing something obvious, or is it really this hard to get an Attribute back off of a Type?
Try typeof(YourType).GetTypeInfo().GetCustomAttributes();
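In context, a minimal sketch of the scan-and-register pattern the question describes (the RegisterAttribute marker and its string key are hypothetical names for illustration):

    using System;
    using System.Collections.Generic;
    using System.Reflection;

    [AttributeUsage(AttributeTargets.Class)]
    public sealed class RegisterAttribute : Attribute
    {
        public string Key { get; }
        public RegisterAttribute(string key) => Key = key;
    }

    public static class TypeRegistry
    {
        // Iterates the assembly's public types once and records those carrying
        // the marker attribute, keyed by the attribute's string key.
        public static Dictionary<string, Type> Scan(Assembly assembly)
        {
            var map = new Dictionary<string, Type>();
            foreach (var type in assembly.ExportedTypes)
            {
                // GetTypeInfo() bridges System.Type and the reflection
                // extension methods on .NET Standard.
                var attr = type.GetTypeInfo().GetCustomAttribute<RegisterAttribute>();
                if (attr != null)
                    map[attr.Key] = type;
            }
            return map;
        }
    }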
The .NET Framework ships with System.Runtime.Serialization.Json.DataContractJsonSerializer and System.Web.Script.Serialization.JavaScriptSerializer, both of which de/serialize JSON. How do I know when to choose one of these types over the other? MSDN doesn't make it clear what their relative advantages are.
We have several projects that consume or emit JSON, and the class selected for each thus far has depended on the opinion of the primary dev on each project. Some are simple; two have complex logic for producing managed types from JSON (the types do not map closely to the streams) but place no emphasis on speed; one requires speed. None interact with WCF, at least as of now.
While I'm interested in alternative libraries, I am hoping that somebody might have an answer to my question too.
The DataContractJsonSerializer is intended for use with WCF client applications where the serialized types are typically POCO classes with the DataContract attribute applied to them. No DataContract, no serialization. The mapping mechanism of WCF makes the sending and receiving very simple, but only if your platform is homogeneous. If you start mixing in different toolsets, your program might go sideways.
The JavaScriptSerializer can serialize any type, including anonymous types (one way), and does so in a more conformant way. You lose the "automagic" of WCF, but you gain more integration options.
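To make the contrast concrete, here is a small sketch serializing the same hypothetical POCO with both types (note that JavaScriptSerializer lives in System.Web.Extensions.dll, which must be referenced):

    using System.IO;
    using System.Runtime.Serialization;
    using System.Runtime.Serialization.Json;
    using System.Text;
    using System.Web.Script.Serialization;

    [DataContract]  // required by DataContractJsonSerializer
    public class Person
    {
        [DataMember]  // members without DataMember are silently skipped
        public string Name { get; set; }
    }

    class Demo
    {
        static void Main()
        {
            var person = new Person { Name = "Ada" };

            // DataContractJsonSerializer: opt-in, attribute-driven, stream-based.
            var dcjs = new DataContractJsonSerializer(typeof(Person));
            string json1;
            using (var stream = new MemoryStream())
            {
                dcjs.WriteObject(stream, person);
                json1 = Encoding.UTF8.GetString(stream.ToArray());
            }

            // JavaScriptSerializer: no attributes needed, works on any public members.
            string json2 = new JavaScriptSerializer().Serialize(person);
        }
    }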
As you can see by the comments, there are a lot of options out there for AJAX serialization. To address your speed vs. maintainability questions, it might be worth investigating them to find a solution that meets the needs of all the teams and reduces the long-term maintenance issues that come from everybody doing things their own way.
2014-04-07 UPDATE:
I suggest using JSON.NET if you can. See http://james.newtonking.com/json Feature Comparison for a review of the 3 libraries considered in this question.
2015-05-26 UPDATE:
If your company requires the use of commercially licensable products, or you need every last bit of performance, you may also want to check out https://servicestack.net/.
Both do approximately the same thing, but they use very different infrastructure, and thus apply different restrictions on the classes you want to serialize/deserialize and provide different degrees of flexibility in tuning the serialization/deserialization process.
For DataContractJsonSerializer you must mark all classes you want to serialize with the DataContract attribute and all serialized members with the DataMember attribute. As well, if some of your classes have enum members, then the enums also must be marked as DataContract and each enum member with the EnumMember attribute.
DataContractJsonSerializer also gives you fine-grained control over the whole serialization/deserialization process, by altering the type-resolution logic and replacing the types you serialize with surrogates.
For JavaScriptSerializer you must provide a parameterless constructor if you plan on deserializing objects from a JSON string.
For me, I usually use JavaScriptSerializer in presentation logic, where there's a simple model I want to render in JSON together with the page, without additional AJAX requests. I usually don't even have to deserialize it back to C#, so there's no overhead at all. But if it's persistence logic, where I want to save objects into a data store (usually no-SQL storage) to load them later, I prefer DataContractJsonSerializer, because the overhead of applying attributes is worth the flexibility in tuning the serialization/deserialization process, especially when it comes to loading serialized data into a newer version of the objects, with updated definitions.
Personally, I think that DataContractJsonSerializer reeks of over-engineering. I'd skip it and go with JavaScriptSerializer. In the event where JavaScriptSerializer isn't available, you can use FridayThe13th (a library I wrote ;p).
Is it possible to serialize complex objects with Protocol Buffers C# (protobuf-net) without using ProtoContract and .proto files?
[ProtoBuf.ProtoContract(ImplicitFields = ProtoBuf.ImplicitFields.AllPublic)]
I have tried using ProtoContract, but even then I can't serialize the object (it is an LLBLGen ORM object).
Yes; there are various options here:

- firstly, note that "implicit fields" is brittle if you add members, since it has to make more guesses than I would like; only use that with stable contracts
- you can apply a default behaviour globally via GlobalSettings, but I tend to advise against it
- protobuf-net v1 can also work with:
  - XmlType/XmlElement attribute pairs, as long as the XmlElement specifies an Order
  - DataContract/DataMember attribute pairs, as long as the DataMember specifies an Order
  - partial classes; even for properties, via ProtoPartialMember attribute(s), etc.
- protobuf-net v2 can be used 100% without attributes of any kind, by using a TypeModel to describe the interesting types at runtime (see the sketch below); this can also compile the model to a dedicated serialization dll if you need (in particular for use with AOT-dependent devices)
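A minimal sketch of that v2 runtime TypeModel approach (the Customer type and field numbers here are hypothetical):

    using System.IO;
    using ProtoBuf.Meta;

    public class Customer  // plain POCO, no attributes anywhere
    {
        public int Id { get; set; }
        public string Name { get; set; }
    }

    class Demo
    {
        static void Main()
        {
            // Describe the type to protobuf-net at runtime instead of via attributes;
            // "false" means: don't try to infer a contract automatically.
            var model = RuntimeTypeModel.Create();
            model.Add(typeof(Customer), false)
                 .Add(1, "Id")
                 .Add(2, "Name");

            var customer = new Customer { Id = 123, Name = "Fred" };
            using (var ms = new MemoryStream())
            {
                model.Serialize(ms, customer);
                ms.Position = 0;
                var clone = (Customer)model.Deserialize(ms, null, typeof(Customer));
            }
        }
    }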
I can advise more, but there are a number of options presented; tell me which is/are most appropriate and I can add more detail.
Re .proto files: those are (and have always been) entirely optional with protobuf-net, since I recognise that there are a lot of cases where a code-first approach (or retrofitting serialization to an existing model) is useful. There is a code generator if you choose to use .proto, of course.