I have the following class structure in my application:
[ProtoContract]
public abstract class WebSyncedObject
{
[ProtoMember(1)]
public DateTime SystemTime { get; set; }
[ProtoMember(2)]
public bool TimeSynchronized { get; set; }
[ProtoMember(3)]
public ulong RelativeTime { get; set; }
[ProtoMember(4)]
public Guid BootID { get; set; }
protected WebSyncedObject()
{
BootID = BootID.GetBootID();
if (BootID == Guid.Empty) return;
TimeSynchronized = Time.TimeSynchronized;
RelativeTime = Time.RelativeTime;
SystemTime = DateTime.Now;
}
}
[ProtoContract]
public class GPSReading : WebSyncedObject
{
[ProtoMember(1)]
public DateTime SatelliteTime { get; set; }
[ProtoMember(2)]
public decimal Latitude { get; set; }
[ProtoMember(3)]
public decimal Longitude { get; set; }
[ProtoMember(4)]
public int NumSatellites { get; set; }
[ProtoMember(5)]
public decimal SpeedKM { get; set; }
}
[ProtoContract]
public class TemperatureReading : WebSyncedObject
{
[ProtoMember(1)]
public decimal Temperature { get; set; }
[ProtoMember(2)]
public int NodeID { get; set; }
[ProtoMember(3)]
public string ProbeIdentifier { get; set; }
}
I then construct a List&lt;WebSyncedObject&gt; with data of both types, and try to serialize it with protobuf-net, at which point I get the following exception:
InvalidOperationException
Unexpected sub-type: Logger.TemperatureReading
I've read about the ProtoInclude attribute, but I don't want to use that because my code needs to be easily extendable, and I'm not sure how the numbering in the RuntimeTypeModel approach is supposed to work, since I've also seen warnings about generating it automatically.
Is there any way to achieve this whilst making it extendable?
Ultimately, there needs to be a robust, reliable and repeatable way for the library to identify a specific sub-type (GPSReading, etc) with a unique identifier (a field-number). In many cases, the most convenient way to do that is via attributes. However, if this is not an option, you can also do it at runtime - perhaps reading the identifiers from some configuration file. It would not be a good idea to just say (at runtime) "find all the available sub-types, order them alphabetically, and number them starting at (say) 10", because in a later build you might have added an AltitudeReading, which would change the number of everything, breaking the existing data. But as long as you can define these in a repeatable manner, then all is good. For example, with attributes...
[ProtoInclude(10, typeof(GPSReading))]
[ProtoInclude(11, typeof(TemperatureReading))]
[ProtoInclude(12, typeof(AltitudeReading))]
But you could also do something in a text file, or an xml configuration file... maybe:
<add key="10" type="Some.Namespace.GPSReading"/>
<add key="11" type="Some.Namespace.TemperatureReading"/>
<add key="12" type="Some.Namespace.AltitudeReading"/>
and add your own code that reads the config file, and calls:
int key = int.Parse(element.GetAttributeValue("key"));
Type type = someAssembly.GetType(element.GetAttributeValue("type"));
RuntimeTypeModel.Default[typeof(WebSyncedObject)].AddSubType(key, type);
Again, to emphasize: the important thing is that the numbers associated with each sub-type must be robustly repeatable in the future. As long as you can guarantee that, it is not required to use attributes. But the model does need to know the identifiers.
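Another repeatable scheme, if you want new sub-types to register themselves without touching the base class, is to put the explicit number on each sub-type via a custom attribute and scan for it at startup. This is only a sketch: the `SubTypeKeyAttribute` name and the registration helper are hypothetical, not part of protobuf-net; what protobuf-net provides is `RuntimeTypeModel.Default[...].AddSubType`.

```csharp
using System;
using System.Linq;
using ProtoBuf.Meta;

// Hypothetical attribute: each subclass declares its own fixed, never-reused number.
[AttributeUsage(AttributeTargets.Class)]
public class SubTypeKeyAttribute : Attribute
{
    public int Key { get; private set; }
    public SubTypeKeyAttribute(int key) { Key = key; }
}

// [SubTypeKey(10)] public class GPSReading : WebSyncedObject { ... }
// [SubTypeKey(11)] public class TemperatureReading : WebSyncedObject { ... }

public static class SubTypeRegistration
{
    // Call once at startup, before any serialization.
    public static void RegisterSubTypes()
    {
        var baseMeta = RuntimeTypeModel.Default[typeof(WebSyncedObject)];
        var subTypes = typeof(WebSyncedObject).Assembly.GetTypes()
            .Where(t => t.IsSubclassOf(typeof(WebSyncedObject)));
        foreach (var type in subTypes)
        {
            var attr = (SubTypeKeyAttribute)Attribute.GetCustomAttribute(
                type, typeof(SubTypeKeyAttribute));
            if (attr != null)
                baseMeta.AddSubType(attr.Key, type);
        }
    }
}
```

Because each class carries its own explicit key, adding AltitudeReading later (with a fresh, unused number) cannot renumber the existing sub-types.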
The order of the members can be set up in the ReSharper options under Languages, C#, Type Members Layout. ReSharper is doing it correctly. However, I would like to exclude certain classes which contain the JsonProperty attribute.
So, for example, refer to the class below. I don't want ReSharper to reorder the members in it.
internal class ExecutionParametersJson
{
[JsonProperty("Factor")]
public string Factor { get; set; }
[JsonProperty("Penalty")]
public string Penalty { get; set; }
[JsonProperty("Origin")]
public string Origin { get; set; }
[JsonProperty("InterFactor")]
public string InterFactor { get; set; }
}
I am using the latest version of ReSharper.
Can anyone show me how to configure ReSharper to achieve this?
Actually, I tried putting an Order attribute in it, but that didn't make any difference.
[JsonProperty("Factor", Order = 1)]
public string Factor { get; set; }
However, I would like to exclude certain classes which contain JSONProperty attribute.
Yes that can be done rather easily.
Given this example code, note the additional properties I included purely to prove a later point:
internal class ExecutionParametersJson
{
[JsonProperty("Factor")]
public string Factor { get; set; }
public string SomeProperty { get; set; }
[JsonProperty("Penalty")]
public string Penalty { get; set; }
[JsonProperty("Origin")]
public string Origin { get; set; }
public int SomeOtherProperty { get; set; }
[JsonProperty("InterFactor")]
public string InterFactor { get; set; }
}
...then choose ReSharper | Options | Code Editing | C# | File Layout; the list of patterns appears:
Choose your preferred pattern. Here I chose Default Pattern. I've been adding to it in the past so it may look different.
Scroll down until you find a region for Properties; you may have to create it, like so:
Select Properties, Indexers, ensure Sort By is set to Name.
Double-click Properties, Indexers. The conditions editor appears. Add a top-level And condition, then a Not condition, and specify the JsonProperty attribute.
Now run ReSharper | Edit | Cleanup Code on the file in question. All properties, except those with a JsonProperty attribute, will be sorted alphabetically and placed into a region titled Properties.
internal class ExecutionParametersJson
{
#region Properties
public int SomeOtherProperty { get; set; }
public string SomeProperty { get; set; }
#endregion
[JsonProperty("Factor")]
public string Factor { get; set; }
[JsonProperty("Penalty")]
public string Penalty { get; set; }
[JsonProperty("Origin")]
public string Origin { get; set; }
[JsonProperty("InterFactor")]
public string InterFactor { get; set; }
}
The additional properties I included here were just to show how members can be formatted conditionally. Feel free to remove those properties or the #region, or customise them to your liking.
Moving on
You may want to tidy this up a bit and create a specific pattern in Resharper called JSON Classes or some such.
I have some classes that are created and maintained by Entity Framework. These classes represent tables in my DB. In two of these tables, I have very similar fields. For example (pseudo-objects):
public class RandomObject1
{
int Identifier { get; set; }
int ObjectType { get; set; }
int SourceID { get; set; }
string OriginationPoint { get; set; }
string PhoneNumber { get; set; }
decimal Cost { get; set; }
int OtherThing1 { get; set; }
int OtherThing2 { get; set; }
int OtherThing3 { get; set; }
}
public class RandomObject2
{
int Identifier { get; set; }
int ObjectType { get; set; }
int SourceID { get; set; }
string OriginationPoint { get; set; }
string PhoneNumber { get; set; }
decimal Cost { get; set; }
double CashValue1 { get; set; }
decimal Snowman2 { get; set; }
int BigFurryTree3 { get; set; }
}
Note that the first few fields of these two objects are the same, and the processes for populating those fields are also the same. Normally in these situations I would have an interface that declares just the first few fields so that I can pass this object as an interface to various processes.
However, in this case, I don't control the code that builds these classes, and really don't want to have to edit the resulting .cs files from the Entity Framework every time it is regenerated.
I was wondering if there is a slick way that I am missing to use generics to do something like the following:
// This method will populate SourceID, OriginationPoint, PhoneNumber and Cost
public void GenerateOriginationInformation<T>(ValidationInformation info, T objectToManipulate) where T : RandomObject1 || RandomObject2 // not valid C# - there is no "or" constraint
{
objectToManipulate.SourceID = GenerateSourceID(info);
objectToManipulate.OriginationPoint = GenerateOriginationPoint(info);
objectToManipulate.PhoneNumber = FindPhoneNumberByOrigination(info);
objectToManipulate.Cost = DetermineCostBySourceAndOrigination(info);
}
Right now, I have to build an entire object/layer that will populate and return the correct object, which results in me writing most of the code for these things twice!
public void GenerateOriginationInformation(ValidationInformation info, RandomObject1 objectToManipulate)
{
objectToManipulate.SourceID = GenerateSourceID(info);
objectToManipulate.OriginationPoint = GenerateOriginationPoint(info);
objectToManipulate.PhoneNumber = FindPhoneNumberByOrigination(info);
objectToManipulate.Cost = DetermineCostBySourceAndOrigination(info);
}
public void GenerateOriginationInformation(ValidationInformation info, RandomObject2 objectToManipulate)
{
objectToManipulate.SourceID = GenerateSourceID(info);
objectToManipulate.OriginationPoint = GenerateOriginationPoint(info);
objectToManipulate.PhoneNumber = FindPhoneNumberByOrigination(info);
objectToManipulate.Cost = DetermineCostBySourceAndOrigination(info);
}
At first, this doesn't look too bad, but this code is highly over-simplified for the purposes of explanation and brevity. Is there a cleaner way to use generics to get the two methods to work as one since I can't implement an interface?
I don't control the code that builds these classes, and really don't want to have to edit the resulting .cs files from the Entity Framework every time it is regenerated
answer in comment: "it is EDMX"
The generated classes from your EDMX designer are partial by default (no additional work necessary by you), so you can create a code file next to the generated files containing a partial class as well, in which you make the type implement an interface.
Generated class
public partial class RandomObject1
Your code file placed in the same project
public partial class RandomObject1 : ICommonInterface
The classes that are generated by Entity Framework are probably "partial". This means that you can write your own partial class to add features of your own to that generated class.
Such as this:
public partial class RandomObject1: ICommonInterface
{
}
With "ICommonInterface" an interface that specified the shared properties.
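Putting it together, a minimal sketch (the interface name `ICommonInterface` comes from the answer above; the member list is inferred from the question's classes, and `GenerateSourceID` etc. are the question's own helpers):

```csharp
// Shared contract for the common columns of both EF-generated classes.
public interface ICommonInterface
{
    int SourceID { get; set; }
    string OriginationPoint { get; set; }
    string PhoneNumber { get; set; }
    decimal Cost { get; set; }
}

// Partial classes in your own code file; EF's generated halves stay untouched
// and survive regeneration.
public partial class RandomObject1 : ICommonInterface { }
public partial class RandomObject2 : ICommonInterface { }

// One method now serves both types - no generics needed.
public void GenerateOriginationInformation(ValidationInformation info, ICommonInterface objectToManipulate)
{
    objectToManipulate.SourceID = GenerateSourceID(info);
    objectToManipulate.OriginationPoint = GenerateOriginationPoint(info);
    objectToManipulate.PhoneNumber = FindPhoneNumberByOrigination(info);
    objectToManipulate.Cost = DetermineCostBySourceAndOrigination(info);
}
```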
I have a little problem, which I'm guessing should have been solved with better code design from the beginning. But here I am.
I have an app with a pretty large user base. The app uses profiles. The profiles are are deserialized from file when starting the app.
In new releases the profile class sometimes gets new properties. If the profile is deserialized from an older version, these properties will be uninitialized, whereas they would have their set default values if the profile had been created with the current version of the app.
Is there a simple way of initializing a property with a default value if the serialized version doesn't have it?
You can specify a method to run after deserializing where you could set default values:
using System.Runtime.Serialization;
[Serializable]
class Car
{
public int Id { get; set; }
public string Make { get; set; }
public int Doors { get; set; }
public string Foo { get; set; } // added property
...
[OnDeserialized()]
internal void OnDeserializedMethod(StreamingContext context)
{
if (string.IsNullOrEmpty(this.Foo))
this.Foo = "Ziggy";
}
}
You might want to consider protobuf-net, which is a data contract binary serializer. It is much more flexible about these things, has more options, is faster, and creates smaller output. I just double-checked to be sure: protobuf-net will not overwrite fields it doesn't have information for. So:
[ProtoContract]
class Car
{
[ProtoMember(1)]
public int Id { get; set; }
[ProtoMember(2)]
public string Make { get; set; }
[ProtoMember(3)]
public int Doors { get; set; }
[ProtoMember(4)]
public string Foo { get; set; } // new prop
public Car()
{
this.Foo = "Ziggy";
}
...
}
If there is no serialized value for Foo, the old value from the ctor is retained. So you could initialize new properties there and not have to worry about them getting reset to null. If you have a lot of properties like Bitmap, Font and Rectangle you might want to stay with the BinaryFormatter.
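A quick way to convince yourself of that behaviour, sketched below. `Serializer.DeepClone` is protobuf-net's serialize-then-deserialize round-trip helper; because a null string member is simply omitted from the wire format, the constructor's default survives deserialization.

```csharp
using ProtoBuf;

var oldCar = new Car { Id = 1, Make = "Volvo", Doors = 4 };
oldCar.Foo = null; // simulate data saved before Foo existed

// Serialize and immediately deserialize. No field 4 is written for the
// null Foo, so the deserialized instance keeps the constructor's "Ziggy".
var restored = Serializer.DeepClone(oldCar);
Console.WriteLine(restored.Foo); // "Ziggy"
```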
Background: I am using EF6 and am free to change the database.
I ran into this problem today. Say I have:
public class Company
{
public int Id { get; set; }
public string Name { get; set; }
}
public class Address
{
public int Id { get; set; }
public int CompanyId { get; set; }
public string Line1 { get; set; }
public string Line2 { get; set; }
}
public class Invoice
{
public int Id { get; set; }
public int CompanyId { get; set; }
public int AddressId { get; set; }
public bool IsCompleted { get; set; }
}
The users are allowed to update Company and Address. Users are also allowed to update Invoice. But since it is a financial document, it must somehow save a snapshot of the address if the user marks IsCompleted to true.
Currently it is done in the following way:
public class Invoice
{
public int Id { get; set; }
public int CompanyId { get; set; }
public int AddressId { get; set; }
public bool IsCompleted { get; set; }
//Auditing fields
public string CompanyName { get; set; }
public string AddressLine1 { get; set; }
public string AddressLine2 { get; set; }
}
I think this is hard to follow. I was thinking:
Option 1: Save an audit to an audit table of its own:
public class Invoice
{
public int Id { get; set; }
public int CompanyId { get; set; }
public int AddressId { get; set; }
public bool IsCompleted { get; set; }
//Null if IsCompleted = false.
public DateTime? CompletedTimeStamp { get; set; }
}
public class CompanyAudit
{
public int CompanyId { get; set; }
public string Name { get; set; }
public DateTime TimeStamp { get; set; }
}
public class AddressAudit
{
public int AddressId { get; set; }
public string Line1 { get; set; }
public string Line2 { get; set; }
public DateTime TimeStamp { get; set; }
}
But then that seems like a lot of tables to create, and a lot of work if we do change the schema for Company and Address. Also, it's not very robust: I can't reuse this for other documents without a bunch of wiring. However, this is mostly what I found on the internet: one audit table for each table.
Option 2: Save all the audit to the same table:
public class Audit
{
public int DocumentId { get; set; }
public string DocumentType { get; set; }
public string JsonData { get; set; }
public DateTime TimeStamp { get; set; }
}
But then, this seems like it's not very standard. I have never saved JSON data to a SQL database before. Is this bad? If so, what could go wrong?
Should I go with Option 1 or Option 2?
Short answer
Assuming you are using SQL Server, I would recommend creating some XML-serializable DTOs for that job and storing XML in a dedicated column, using the XML datatype.
Slightly longer answer
I have already gone that exact same path. We needed to save a snapshot of data at the point that it has been printed out, and there were many tables involved that would have been duplicated in the process.
Requirements and Evaluation
We didn't want to incorporate an additional technology (e.g. File System as proposed by Andreas or some NoSQL/document database), but store everything in SQL Server, as otherwise this would have complicated backup scenarios, deployment, maintenance and so on.
We wanted something easy to understand. New developers should be familiar with the technology used. Architecture shouldn't be influenced too much.
For serialization, there are several options: XML, JSON, BinaryFormatter, DataContractSerializer, Protocol Buffers... Our requirements: Easy versioning (for added properties or relationships), readability, conformance with SQL Server.
Easy versioning should be possible using all mentioned formats. Readability: XML and JSON win here. Conformance with SQL Server: XML is supported in SQL Server natively and was our choice.
Implementation
We did several things:
Create additional DTOs in our database project, side by side with the existing Entities, not for usage with EF but for XML serialization. They are annotated with XmlAttributes and resemble a complex, self-contained structure with everything that is needed to hold the document's data. The root InvoiceSnapshot class has Parse and ToXml methods to support serialization.
Update our Entities to include the snapshots, where required:
public string InvoiceXml { get; set; }
public InvoiceSnapshot Invoice
{
get
{
return this.InvoiceXml != null
? InvoiceSnapshot.Parse(this.InvoiceXml)
: null;
}
set { this.InvoiceXml = value != null ? value.ToXml() : null; }
}
Update the entity configuration to create an XML column and ignore the InvoiceSnapshot property:
public class InvoiceEntityConfig : EntityTypeConfiguration<InvoiceEntity>
{
public InvoiceEntityConfig()
{
this.Property(c => c.InvoiceXml).HasColumnType("xml");
this.Ignore(c => c.Invoice);
}
}
Modify our business objects so that they load themselves either from Entities (editable state) or from XML-DTOs (snapshot, readonly state). We use interfaces on both where they help streamlining the process.
Further steps
You should add metadata for common queries in separate scalar columns and index them. Retrieve the XML data only when you really want to show the invoice.
You can look into what SQL Server can do for you regarding XML, especially when you need to query based on attributes. It can index them, and you can use XPath in queries.
Sign or hash your XML to make sure that snapshot data won't be tampered with.
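For that last point, a minimal sketch of hashing the snapshot on write. This assumes you add a hypothetical `InvoiceXmlHash` column; it uses HMAC-SHA256 with a server-side secret so that a plain database edit cannot recompute a matching hash.

```csharp
using System;
using System.Security.Cryptography;
using System.Text;

public static class SnapshotIntegrity
{
    // Compute a keyed hash over the snapshot XML.
    public static string ComputeHash(string xml, byte[] secretKey)
    {
        using (var hmac = new HMACSHA256(secretKey))
        {
            return Convert.ToBase64String(
                hmac.ComputeHash(Encoding.UTF8.GetBytes(xml)));
        }
    }

    // Recompute and compare on read; a mismatch means the XML was altered.
    public static bool Verify(string xml, string storedHash, byte[] secretKey)
    {
        return ComputeHash(xml, secretKey) == storedHash;
    }
}

// On save:  entity.InvoiceXmlHash = SnapshotIntegrity.ComputeHash(entity.InvoiceXml, key);
// On load:  if (!SnapshotIntegrity.Verify(entity.InvoiceXml, entity.InvoiceXmlHash, key))
//               { /* flag the snapshot as tampered */ }
```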
Option 2,
BUT
a better way is to store the generated invoice (PDF), and all revisions of it, as ordinary files. You can still use your Invoice table, but then you don't need to worry when some customer data changes and a user reprints an older document. Storing the generated documents is nearly the same as storing the model in JsonData, except that you don't need to save the version of your template and template generator.
Option 1 is just brute force, and is perhaps better realized with the "Event Store", "Event Sourcing" and "Command and Query" patterns.
public class Document //or DocumentFile
{
public int DocumentId { get; set; }
public string DocumentType { get; set; }
public string FilePath { get; set; }
[Index]
public string Owner { get; set; } // e.g. "Customer:Id", "Contact:Id"; maybe just int
[Index]
public string Reference { get; set; } // e.g. "Invoice:Id", "Contract:Id"; maybe just int
public DateTime TimeStamp { get; set; } // maybe int Revision
}
I have a 3rd party application that provides an object with many "attributes", which are simply pairs of (string) keys and values. The value types can be either strings, DateTime, Int32 or Int64.
I need to create my own class to represent this object, in a convenient way. I'm creating a WCF service that provides this object to clients, so I need it to be very easy and clean.
The keys of the attributes will be presented as an Enum for the clients (to hide the information of the specific key strings of the 3rd party application). However, I'm not sure how to represent the values. Here are some of the options:
Option 1: Have a different collection per attribute value type; seems ugly, but will be very easy for clients to use
public class MyObject
{
public Dictionary<MyTextAttributeKeysEnum, string> TextAttributes { get; set; }
public Dictionary<MyDateAttributeKeysEnum, DateTime> DateAttributes { get; set; }
public Dictionary<MyNumAttributeKeysEnum, long> NumericAttributes { get; set; }
public string Name { get; set; }
public string Id { get; set; }
}
Option 2: Convert all of the attributes to strings
public class MyObject
{
public Dictionary<MyAttributeKeysEnum, string> MyAttributes { get; set; }
public string Name { get; set; }
public string Id { get; set; }
}
Option 3: Keep them as objects, let the clients bother with casting and converting
public class MyObject
{
public Dictionary<MyAttributeKeysEnum, object> MyAttributes { get; set; }
public string Name { get; set; }
public string Id { get; set; }
}
Using several dictionaries just doesn't look nice :) But might work in some scenarios.
If you are absolutely sure that string is enough for all - go with strings. But if some other code would need to parse it - that's going to be expensive.
If you want a really simple, straightforward solution, just go with objects. Even though it would introduce boxing/unboxing for value types (forget about that if you don't operate on thousands of objects) and you'd lose type information on values, this solution might still work just fine.
Also you might consider introducing an intermediate class for a value. Something like
public Dictionary<MyAttributeKeysEnum, PropertyBagValue> MyAttributes { get; set; }
public class PropertyBagValue
{
public object AsObject { get; set; }
public string AsString { get; set; }
public int AsInt { get; set; }
// ...
}
Internally you could store your value in a variable of the original type (an int in an int variable, a string in a string variable, etc., i.e. have a separate variable for each type), and then you can avoid type conversion. Also, you could wrap your dictionary in another class, add some useful accessors and make it look nicer. I don't know how this fits into your infrastructure, though.
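A sketch of such a value wrapper that stores the value once, in its original type, and exposes typed accessors. The shape and names here (`AttributeValueKind`, the `AsXxx` properties) are just one possibility, not an established API:

```csharp
using System;

public enum AttributeValueKind { String, DateTime, Int32, Int64 }

public class PropertyBagValue
{
    private readonly object value; // stored once, in its original type

    public PropertyBagValue(string s)   { value = s; Kind = AttributeValueKind.String; }
    public PropertyBagValue(DateTime d) { value = d; Kind = AttributeValueKind.DateTime; }
    public PropertyBagValue(int i)      { value = i; Kind = AttributeValueKind.Int32; }
    public PropertyBagValue(long l)     { value = l; Kind = AttributeValueKind.Int64; }

    public AttributeValueKind Kind { get; private set; }

    public object AsObject     { get { return value; } }
    public string AsString     { get { return value.ToString(); } }
    public DateTime AsDateTime { get { return (DateTime)value; } }
    public long AsInt64        { get { return Convert.ToInt64(value); } }
}
```

Note that for WCF you would still need to make this type serializable (e.g. with [DataContract], and the members typed in a way the serializer can handle), which is where the boxing and lost-type-information trade-offs discussed above come back in.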
How about making your DataContract class abstract and providing dictionaries with the types you need in derived classes:
[DataContract]
[KnownType(typeof(My3dPartyObjectString))]
[KnownType(typeof(My3dPartyObjectInt64))]
public abstract class My3dPartyObjectBase
{
// some common properties
}
[DataContract]
public class My3dPartyObjectString : My3dPartyObjectBase
{
public Dictionary<My3PAttributeKeysEnum, string> MyStringAttributes { get; set; }
}
[DataContract]
public class My3dPartyObjectInt64 : My3dPartyObjectBase
{
public Dictionary<My3PAttributeKeysEnum, long> MyInt64Attributes { get; set; }
}
Then the client will have to analyse the real type of the returned object and get the collection of attributes based on that type. That would be close to your third option, but the client will at least have some type safety at the response-object level.