I am trying to create a generic wrapper library (C#/.NET) for AWS DynamoDB that can act as a DAL (Data Access Layer). The applications consuming this library should not be tightly coupled to the AWS libraries, as there is a possibility they will be changed later.
The methods to be exposed from the wrapper class are:
InsertItem<T>(object), UpdateItem<T>(object), DeleteItem<T>(id/object),
List<T> GetAll(), T GetByParameter<T>(Id).
I see that there are three approaches to consuming AWS DynamoDB services using the AWSSDK.
Approach (1): Low-level access - convert the model to the AWS attribute-map input structure and invoke GetItem()/PutItem().
Approach (2): High-level access using the Document Model - convert the model to the AWS Document type and pass the Document object to AWS.
Approach (3): High-level access using the Object Persistence Model - decorate the model with the DynamoDBTable attribute to map it to a DynamoDB table, and use LINQ operations to get/update the table.
In approaches (1) and (2), I find it difficult to map the model to the DynamoDB table. In approach (3), I need to include the DynamoDB attributes in the model class in the application, which would make it tightly coupled.
Is there any way to create the mapping at runtime in these cases, or is there any other approach?
I also considered whether I could JSON serialize/deserialize the model and insert it into DynamoDB (in which case there would be only two columns for any model: an id and the JSON body).
Please correct me if I am wrong or missing something.
What follows is the solution I'm moving forward with, minus some extraneous details. The biggest challenge is reading data of arbitrary types, since you can't easily tell the type just from the JSON.
In my solution, the consuming application that chooses the types to write also knows how to identify which type to deserialize to when reading. This is necessary because I need to return multiple logs of different types, so it makes more sense to return the JSON to the consumer and let them deal with it. If you want to contain this in the DAL, you could add a Type column in the DB and convert it using Reflection, though I think you'd still have issues returning data for multiple types in one call.
The interface we present for consumption has read/write methods (you can add whatever else you need). Note that writing allows specification of T, but reading requires the caller to deserialize, as mentioned above.
public interface IDataAccess
{
Task WriteAsync<T>(Log<T> log) where T : class, new();
Task<IEnumerable<LogDb>> GetLogsAsync(long id);
}
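For illustration, here is a minimal sketch of an implementation behind this interface, assuming the SDK's DynamoDBContext (the class name and constructor wiring here are mine, not part of the original solution):

public class DynamoDataAccess : IDataAccess
{
    private readonly IDynamoDBContext _context;

    public DynamoDataAccess(IAmazonDynamoDB client)
    {
        _context = new DynamoDBContext(client);
    }

    public Task WriteAsync<T>(Log<T> log) where T : class, new()
    {
        // Convert the strongly-typed log to the attribute-decorated DB model first
        return _context.SaveAsync(log.ToDb());
    }

    public async Task<IEnumerable<LogDb>> GetLogsAsync(long id)
    {
        // Query by hash key; assumes the table is keyed so multiple rows can share an Id
        return await _context.QueryAsync<LogDb>(id).GetRemainingAsync();
    }
}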
A LogDb class contains the Persistence attributes:
[DynamoDBTable("TableName")]
public class LogDb
{
[DynamoDBHashKey("Id")]
public long Id { get; set; }
[DynamoDBProperty(AttributeName = "Data", Converter = typeof(JsonStringConverter))]
public string DataJson { get; set; }
}
A generic Log<T> class is used for strongly-typed writing. The ToDb() method is called in the IDataAccess implementation to actually write to the DB. The constructor taking a LogDb would be used by a consuming application that has identified the appropriate type for deserialization:
public class Log<T>
where T : class, new()
{
public Log() { }
public Log(LogDb logDb)
{
Data = logDb.DataJson.Deserialize<T>();
Id = logDb.Id;
}
public long Id { get; set; }
public T Data { get; set; }
public LogDb ToDb()
{
string dataJson = Data.Serialize();
return new LogDb
{
DataJson = dataJson,
Id = Id
};
}
}
The JsonStringConverter used in the attributes on LogDb converts the JSON string in the DataJson property to and from a DynamoDB Document:
public class JsonStringConverter : IPropertyConverter
{
public DynamoDBEntry ToEntry(object value)
{
string json = value as string;
return !String.IsNullOrEmpty(json)
? Document.FromJson(json)
: null;
}
public object FromEntry(DynamoDBEntry entry)
{
var document = entry.AsDocument();
return document.ToJson();
}
}
A helper class provides the Serialize and Deserialize extensions, which wrap JSON.NET's JsonConvert.SerializeObject/DeserializeObject with null checks:
public static class JsonHelper
{
public static string Serialize(this object value)
{
return value != null
? JsonConvert.SerializeObject(value)
: String.Empty;
}
public static T Deserialize<T>(this string json)
where T : class, new()
{
return !String.IsNullOrWhiteSpace(json)
? JsonConvert.DeserializeObject<T>(json)
: null;
}
}
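Putting the pieces together, usage might look like this (OrderData is a hypothetical payload type, and dataAccess is any IDataAccess implementation, such as the sketch above):

var log = new Log<OrderData>
{
    Id = 42,
    Data = new OrderData { OrderNumber = "A-1001", Quantity = 3 }
};
await dataAccess.WriteAsync(log);

// Reading: the consumer knows which concrete type the JSON represents
foreach (LogDb row in await dataAccess.GetLogsAsync(42))
{
    OrderData data = row.DataJson.Deserialize<OrderData>();
}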
A generic extension method that converts a list of DynamoDB Documents into a list of C# model objects:
public static List<T> ToMap<T>(this List<Document> items)
{
    var models = new List<T>();
    foreach (Document doc in items)
    {
        T model = Activator.CreateInstance<T>();
        Type modelType = model.GetType();
        foreach (var attributeName in doc.GetAttributeNames())
        {
            DynamoDBEntry entry = doc[attributeName];
            if (entry is Primitive primitive)
            {
                // Map string/numeric attributes to a property of the same name, if any
                var property = modelType.GetProperty(attributeName);
                if (property != null)
                {
                    if (primitive.Type == DynamoDBEntryType.String)
                        property.SetValue(model, Convert.ToString(primitive.Value));
                    else if (primitive.Type == DynamoDBEntryType.Numeric)
                        property.SetValue(model, Convert.ToInt32(primitive.Value));
                }
            }
            else if (entry is DynamoDBBool)
            {
                var property = modelType.GetProperty(attributeName);
                if (property != null)
                    property.SetValue(model, entry.AsBoolean());
            }
        }
        models.Add(model);
    }
    return models;
}
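For example, with a hypothetical Customer model whose property names match the table's attribute names, the extension can be applied to the result of a Document Model scan (the table name and client are assumptions):

var table = Table.LoadTable(client, "Customers"); // client is an AmazonDynamoDBClient
List<Document> documents = await table.Scan(new ScanFilter()).GetRemainingAsync();
List<Customer> customers = documents.ToMap<Customer>();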
Related
I'm trying to persist the following class to DynamoDB using the .NET SDK:
public class MyClass
{
public string Id { get; set; }
public string Name { get; set; }
public object Settings { get; set; }
}
The problem is with the Settings property. It can be any type of object, and I do not know in advance what might be assigned to it. When I try to persist it to DynamoDB, I get the following exception:
System.InvalidOperationException: 'Type System.Object is unsupported, it has no supported members'
Both the Document Model and Object Persistence Model methods result in the same exception.
Is there a way to persist these objects in DynamoDB? Other databases like MongoDB and Azure DocumentDB will do this without any issue, and they can be deserialized to either the proper type with a discriminator, or as a dynamic JSON object.
You can use the general approach documented here: https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/DynamoDBContext.ArbitraryDataMapping.html
Here's my implementation for any arbitrary object:
public class DataConverter : IPropertyConverter
{
public object FromEntry(DynamoDBEntry entry)
{
var primitive = entry as Primitive;
if (primitive == null || !(primitive.Value is String) || string.IsNullOrEmpty((string)primitive.Value))
throw new ArgumentOutOfRangeException();
object ret = JsonConvert.DeserializeObject(primitive.Value as string);
return ret;
}
public DynamoDBEntry ToEntry(object value)
{
var jsonString = JsonConvert.SerializeObject(value);
DynamoDBEntry ret = new Primitive(jsonString);
return ret;
}
}
Then annotate your property like this:
[DynamoDBProperty(typeof(DataConverter))]
public object data { get; set; }
A little improvement to the previous answer: make the converter generic so that you can deserialize to the correct type, like this:
public class SerializeConverter<T> : IPropertyConverter
{
public object FromEntry(DynamoDBEntry entry)
{
var primitive = entry as Primitive;
if (primitive is not { Value: string value } || string.IsNullOrEmpty(value))
throw new ArgumentException("Data has no value", nameof(entry));
return JsonConvert.DeserializeObject<T>(value);
}
public DynamoDBEntry ToEntry(object value) =>
new Primitive(JsonConvert.SerializeObject(value));
}
Usage:
[DynamoDBProperty(typeof(SerializeConverter<YourType>))]
public YourType data { get; set; }
I struggled to find a good solution for interacting with thoroughly unstructured data, then eventually realized that the DynamoDBContext really isn't designed for that.
For anyone else who gets to this point, my advice is to drop to a lower abstraction level and use the AmazonDynamoDBClient directly with Dictionary<string, AttributeValue> objects.
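For what it's worth, a minimal sketch of that lower-level approach (the table and attribute names here are made up for illustration):

var client = new AmazonDynamoDBClient();

// Write: build the attribute map by hand
await client.PutItemAsync("Logs", new Dictionary<string, AttributeValue>
{
    ["Id"] = new AttributeValue { N = "42" },
    ["Data"] = new AttributeValue { S = "{\"some\":\"json\"}" }
});

// Read: get the raw attribute map back and interpret it yourself
var response = await client.GetItemAsync("Logs", new Dictionary<string, AttributeValue>
{
    ["Id"] = new AttributeValue { N = "42" }
});
string json = response.Item["Data"].S;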
I work on an automation team designing tests for electronic components. One thing our framework sorely needs is a single source point for our driver objects for the various pieces of test equipment at a workbench (right now, driver object creation is very wild-west).
Basically, the idea is that there would be one object, constructed based on configuration file(s), which is the single place all other test code looks to for driver objects, retrieved by a name string. I'll call it a "DriverSource" here.
The problem is, these drivers do not present similar interfaces at all. One might be a power supply (with methods like "SetVoltage" and "SetCurrentLimit"), while another might be a digital multimeter (with methods like "ReadVoltage" or "ReadCurrent").
The best solution I've come up with is to have a method with the following declaration:
public object GetDriver(string name);
Then, the test code using my "DriverSource" object would call that method, and then cast the System.Object to the correct driver type (or more accurately, the correct driver interface, like IPowerSupply).
I think casting like that is acceptable because whatever test code is about to use this driver had better know what the interface is. But I was hoping to get some input on whether or not this is an anti-pattern waiting to bite me. Any better pattern for solving this issue would also be greatly appreciated.
A final note: I think this is obvious, but performance is essentially a non-issue here. Fetching the drivers is something that will happen fewer than 100 times in a test run that can last hours.
If you already know the type and you're going to cast to an interface or class anyway, a better approach would be to hand the method call a type parameter.
public T GetDriver<T>(string name);
You can then use a Factory pattern to return you an object of the appropriate type from the method.
public T GetDriver<T>(string name)
{
switch(typeof(T).Name)
{
case "Foo":
// Construct and return a Foo object
case "Bar":
// Construct and return a Bar object
case "Baz":
// Construct and return a Baz object
default:
return default(T);
}
}
Usage:
var driver = GetDriver<Foo>(someString); // Returns a Foo object
If you really want to make this generic, I would use a factory pattern.
Let's start off by identifying the type structure:
public interface IDriver
{
}
public interface IPowerSupply : IDriver
{
void SetVoltage();
void SetCurrent();
}
public interface IMultimeter : IDriver
{
double MeasureVoltage();
}
Which you can add to or remove from as needed. Now we need a way for the factory to auto-discover the correct types and provide the configuration information to them. So let's create a custom attribute:
public class DriverHandlerAttribute : Attribute
{
public Type DriverType { get; set; }
public string ConfigurationName { get; set; }
}
And then we need a place to store configuration data. This type can contain whatever you want, like a dictionary of keys/values that are loaded from configuration files:
public class Configuration
{
public string DriverName { get; set; }
public string OtherSetting { get; set; }
}
Finally we can create a driver. Let's create an IPowerSupply:
[DriverHandler(DriverType = typeof(IPowerSupply), ConfigurationName="BaseSupply")]
public class BasePowerSupply : IPowerSupply
{
public BasePowerSupply(Configuration config) { /* ... */ }
public void SetVoltage() { /* ... */ }
public void SetCurrent() { /* ... */ }
}
The important part is that it is decorated with the attribute and that it has a constructor (although I created the factory so that it can use default constructors too):
public static class DriverFactory
{
public static IDriver Create(Configuration config)
{
Type driverType = GetTypeForDriver(config.DriverName);
if (driverType == null) return null;
if (driverType.GetConstructor(new[] { typeof(Configuration) }) != null)
return Activator.CreateInstance(driverType, config) as IDriver;
else
return Activator.CreateInstance(driverType) as IDriver;
}
public static T Create<T>(Configuration config) where T : IDriver
{
return (T)Create(config);
}
private static Type GetTypeForDriver(string driverName)
{
var type = (from t in Assembly.GetExecutingAssembly().GetTypes()
let attrib = t.GetCustomAttribute<DriverHandlerAttribute>()
where attrib != null && attrib.ConfigurationName == driverName
select t).FirstOrDefault();
return type;
}
}
So to use this, you would read in the configuration data (loaded from XML, read from a service, files, etc). You can then create the driver like:
var driver = DriverFactory.Create(configuration);
Or if you are using the generic method and you know the configuration is for a power supply, you can call:
var driver = DriverFactory.Create<IPowerSupply>(configuration);
And when you run your tests, you can verify that you get the right data back, for example, in your test method:
Assert.IsTrue(driver is IPowerSupply);
Assert.IsTrue(driver is BasePowerSupply);
Assert.DoesWhatever(((IPowerSupply)driver).SetVoltage());
And so on and so forth.
I would go with this code:
public T GetDriver<T>(string name)
{
return ((Func<string, T>)_factories[typeof(T)])(name);
}
The _factories object looks like this:
private Dictionary<Type, Delegate> _factories = new Dictionary<Type, Delegate>()
{
{ typeof(Foo), (Delegate)(Func<string, Foo>)(s => new Foo(s)) },
{ typeof(Bar), (Delegate)(Func<string, Bar>)(s => new Bar()) },
{ typeof(Baz), (Delegate)(Func<string, Baz>)(s => new Baz()) },
};
Basically the _factories dictionary contains all of the code to create each object type based on string parameter passed in. Note that in my example above the Foo class takes s as a constructor parameter.
The dictionary can also be modified at run-time to suit your needs without needing to recompile code.
I would even go one step further. If you define this factory class:
public class Factory
{
private Dictionary<Type, Delegate> _factories = new Dictionary<Type, Delegate>();
public T Build<T>(string name)
{
return ((Func<string, T>)_factories[typeof(T)])(name);
}
public void Define<T>(Func<string, T> create)
{
_factories.Add(typeof(T), create);
}
}
You can then write this code:
var drivers = new Factory();
drivers.Define(s => new Foo(s));
drivers.Define(s => new Bar());
drivers.Define(s => new Baz());
var driver = drivers.Build<Foo>("foo");
I like that even better. It's strongly-typed and easily customized at run-time.
I want to create a key value table in my database along the lines of
public class KeyValue {
public string Id { get; set; }
public dynamic Value { get; set; }
}
Using a slightly modified SqlProvider I have no problems getting CreateTable<KeyValue>() to generate varchar(1024) Id, varchar(max) Value.
I have no issues saving objects to it. The problem is when I load the objects:
var content = dbConn.GetById<KeyValue>("about");
content.Value at this point is a string.
Looking at the database record, the text for value does not appear to store any type information.
Is there really anything better I can do, other than manually invoking ServiceStack.Text and calling deserialize with the appropriate type information?
I do not need fully dynamic values; my actual use case is polymorphism with a base class. So I don't really care whether Value is typed as the base class, dynamic, object, etc. Regardless, even using the class
public class KeyValue {
public string Id { get; set; }
public MySpecificChildType Value { get; set; }
}
I haven't been able to get anything other than a string back for Value. Can I tell OrmLite to serialize the type information so it can correctly deserialize my objects, or do I just have to do it manually?
Edit: some further information. OrmLite uses the JSV serializer defined by ServiceStack.Text.TypeSerializer, which is in no way pluggable in the BSD version. If I add a Type property to my KeyValue class with the dynamic Value, I can do:
var value = content.Value as string;
MySpecificChildType strongType = (MySpecificChildType)
    TypeSerializer.DeserializeFromString(value, content.Type);
I just really want a better way to do this; I don't like an object of one type going into the db and coming back out as a different type (string).
I haven't worked much with the JsvSerializer, but with the JsonSerializer you can achieve this (in a few different ways), and as of ServiceStack 4.0.11 you can opt to use the JsonSerializer instead; see https://github.com/ServiceStack/ServiceStack/blob/master/release-notes.md#v4011-release-notes.
Example
public abstract class BaseClass {
//Used for second example of custom type lookup
public abstract string Type { get; set; }
}
public class ChildA : BaseClass {
//Used for second example of custom type lookup
public override string Type { get; set; }
public string PropA { get; set; }
}
And then in your init/bootstrap class you can configure the serializer to emit the type information needed for proper deserialization:
public class Bootstrapper {
public void Init() {
ServiceStack.Text.JsConfig.ExcludeTypeInfo = false;
ServiceStack.Text.JsConfig.IncludeTypeInfo = true;
}
}
If you wish to use something other than the default "__type" attribute that ServiceStack uses (if, for example, you want a friendly name identifying the type rather than namespace/assembly), you can also configure your own custom type lookup, as such:
public class Bootstrapper {
public void Init() {
ServiceStack.Text.JsConfig.ExcludeTypeInfo = false;
ServiceStack.Text.JsConfig.IncludeTypeInfo = true;
ServiceStack.Text.JsConfig.TypeAttr = "type";
ServiceStack.Text.JsConfig.TypeFinder = type =>
{
if ("CustomTypeName".Equals(type, StringComparison.OrdinalIgnoreCase))
{
return typeof(ChildA);
}
return typeof(BaseClass);
};
}
}
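A quick round-trip sketch under the first configuration above (this is only to illustrate the effect of IncludeTypeInfo; it is not code from the original answer):

// After Bootstrapper.Init() has run:
BaseClass original = new ChildA { PropA = "hello" };

// The JSON now carries a type attribute identifying ChildA...
string json = ServiceStack.Text.JsonSerializer.SerializeToString(original);

// ...so deserializing via the base class restores the concrete type
var restored = ServiceStack.Text.JsonSerializer.DeserializeFromString<BaseClass>(json);
// restored is a ChildA instance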
I would like to deserialize my JSON in two steps because I have to read the first part to know what kind of object it is.
I have to read this kind of JSON:
{"header":3,"data":{"result":"myResult"}}
It's more readable like this:
{
"header":3,
"data":{
"result":"myResult"
}
}
I deserialize this JSON into a class named ProtocolHeader:
public class ProtocolHeader
{
[JsonProperty("header")]
public int Header { get; set; }
[JsonProperty("data")]
public string Data { get; set; }
}
To do this I use this code:
JsonConvert.DeserializeObject<ProtocolHeader>(Json)
Depending on the value of the Header, I will choose a different class to deserialize the rest of the payload.
For example, I could have another class
public class ProtocolResult
{
[JsonProperty("result")]
public string Result { get; set; }
}
or like this:
public class ProtocolError
{
[JsonProperty("errorNumber")]
public int ErrorNumber{ get; set; }
[JsonProperty("additionalInformation")]
public string AdditionalInformation{ get; set; }
}
Do you have an idea how to separate the deserialization into two steps?
Thanks
You could make 3 classes:
one common class (not a base class) which has all of the fields, plus the ProtocolResult and ProtocolError.
Then implement an implicit cast to each.
You could also put an IsError getter on your common class to decide how to use it; a sketch of this approach follows.
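A minimal sketch of that common-class idea (the nested data shape and the nullable ErrorNumber are my assumptions based on the JSON in the question):

public class ProtocolData
{
    [JsonProperty("result")]
    public string Result { get; set; }
    [JsonProperty("errorNumber")]
    public int? ErrorNumber { get; set; }
    [JsonProperty("additionalInformation")]
    public string AdditionalInformation { get; set; }
}

public class ProtocolMessage
{
    [JsonProperty("header")]
    public int Header { get; set; }
    [JsonProperty("data")]
    public ProtocolData Data { get; set; }

    // Decide how to treat the message without a second deserialization pass
    public bool IsError => Data?.ErrorNumber != null;

    public static implicit operator ProtocolResult(ProtocolMessage m) =>
        new ProtocolResult { Result = m.Data?.Result };

    public static implicit operator ProtocolError(ProtocolMessage m) =>
        new ProtocolError
        {
            ErrorNumber = m.Data?.ErrorNumber ?? 0,
            AdditionalInformation = m.Data?.AdditionalInformation
        };
}

Usage is then a single deserialization followed by a cast:

var message = JsonConvert.DeserializeObject<ProtocolMessage>(json);
ProtocolError error = message; // implicit cast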
You can use a reader to read only as far as you need, then skip out of the reader and do your real deserialization.
It's probably not a whole lot better than deserializing into a simple object first and a real object later, but it's an alternative.
You can probably tweak this a bit.
string json = @"{""header"":3,""data"":{""result"":""myResult""}}";
using (var stringReader = new StringReader(json))
{
using (var jsonReader = new JsonTextReader(stringReader))
{
while (jsonReader.Read())
{
if (jsonReader.TokenType == JsonToken.PropertyName
&& jsonReader.Value != null
&& jsonReader.Value.ToString() == "header")
{
jsonReader.Read();
int header = Convert.ToInt32(jsonReader.Value);
switch (header)
{
case 1:
// Deserialize as type 1
break;
case 2:
// Deserialize as type 2
break;
case 3:
// Deserialize as type 3
break;
}
break;
}
}
}
}
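For example, once the header is known, one option (my addition, not part of the answer above) is to re-parse the payload and convert just the data token with JSON.NET's LINQ-to-JSON API:

var root = JObject.Parse(json);
int header = (int)root["header"];
if (header == 3)
{
    // Only the nested object is materialized into a concrete type
    ProtocolResult result = root["data"].ToObject<ProtocolResult>();
}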
Option 1: Without using an abstract base class for your data classes.
The easiest way I've found to do this is as follows:
Declare your class using JToken as the field type for the unknown object.
[JsonObject(MemberSerialization.OptIn)]
public class ProtocolHeader
{
[JsonProperty("header")]
private int _header;
[JsonProperty("data")]
private JToken _data;
}
Expose the specialized data through properties:
public ProtocolResult Result
{
get
{
if (_data == null || _header != ResultHeaderValue)
return null;
return _data.ToObject<ProtocolResult>();
}
}
public ProtocolError Error
{
get
{
if (_data == null || _header != ErrorHeaderValue)
return null;
return _data.ToObject<ProtocolError>();
}
}
Option 2: Using an abstract base class for your data classes.
Another option is to create an abstract base class for the various data types, and create a static method in the abstract base class to perform the type selection and proper deserialization. This is particularly useful when the type information is contained in the object itself (e.g. if header was a property inside the data object).
For example, the LoadBalancerConfiguration<T>._healthMonitor field has the type JObject, but the HealthMonitor property in the same class returns a HealthMonitor object. The HealthMonitor.FromJObject method performs the actual deserialization.
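A minimal sketch of option 2, with hypothetical names (the header constants are placeholders, as in option 1, and ProtocolResult and ProtocolError would derive from ProtocolData):

public abstract class ProtocolData
{
    // Placeholder header values; the question only shows header 3 for results
    private const int ResultHeaderValue = 3;
    private const int ErrorHeaderValue = 4;

    public static ProtocolData FromJToken(int header, JToken data)
    {
        // Type selection lives in one place, next to the type hierarchy
        switch (header)
        {
            case ResultHeaderValue:
                return data.ToObject<ProtocolResult>();
            case ErrorHeaderValue:
                return data.ToObject<ProtocolError>();
            default:
                throw new NotSupportedException("Unknown header: " + header);
        }
    }
}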
I'm trying to grasp how Azure table storage works to create facebook-style feeds and I'm stuck on how to retrieve the entries.
(My question is almost the same as https://stackoverflow.com/questions/6843689/retrieve-multiple-type-of-entities-from-azure-table-storage, but the link in the answer is broken.)
This is my intended approach:
Create a personal feed for all users within my application which can contain different types of entries (notification, status update etc). My idea is to store them in an Azure Table grouped by a partition key for each user.
Retrieve all entries within the same partition key and pass it to different views depending on entry type.
How do I query the table storage for all types of the same base type while keeping their unique properties?
The CloudTableQuery<TElement> requires a typed entity, if I specify EntryBase as generic argument I don't get the entry-specific properties (NotificationSpecificProperty, StatusUpdateSpecificProperty) and vice versa.
My entities:
public class EntryBase : TableServiceEntity
{
public EntryBase()
{
}
public EntryBase(string partitionKey, string rowKey)
{
this.PartitionKey = partitionKey;
this.RowKey = rowKey;
}
}
public class NotificationEntry : EntryBase
{
public string NotificationSpecificProperty { get; set; }
}
public class StatusUpdateEntry : EntryBase
{
public string StatusUpdateSpecificProperty { get; set; }
}
My query for a feed:
List<EntryBase> entries = // how do I fetch all entries?
foreach (var item in entries)
{
if(item.GetType() == typeof(NotificationEntry)){
// handle notification
}else if(item.GetType() == typeof(StatusUpdateEntry)){
// handle status update
}
}
Finally there's an official way! :)
Look at the NoSQL sample, which does exactly this, in this post from the Azure Storage Team Blog:
Windows Azure Storage Client Library 2.0 Tables Deep Dive
There are a few ways to go about this and how you do it depends a bit on your personal preference as well as potentially performance goals.
Create an amalgamated class that represents all queried types. If I had a StatusUpdateEntry and a NotificationEntry, I would simply merge every property into a single class. The serializer will automatically fill in the correct properties and leave the others null (or default). If you also put a 'type' property on the entity (calculated or set in storage), you can easily switch on that type. Since I always recommend mapping from the table entity to your own type in the app, this works fine as well (the class is only used as a DTO).
Example:
[DataServiceKey("PartitionKey", "RowKey")]
public class NoticeStatusUpdateEntry
{
public string PartitionKey { get; set; }
public string RowKey { get; set; }
public string NoticeProperty { get; set; }
public string StatusUpdateProperty { get; set; }
public string Type
{
get
{
return String.IsNullOrEmpty(this.StatusUpdateProperty) ? "Notice" : "StatusUpdate";
}
}
}
Override the serialization process. You can do this yourself by hooking the ReadingEntity event. It gives you the raw XML, and you can choose to deserialize however you want. Jai Haridas and Pablo Castro gave some example code for reading an entity when you don't know the type (included below), and you can adapt it to read the specific types you do know about.
The downside to both approaches is that you end up pulling more data than you need in some cases. You need to weigh this against how often you really want to query one type versus another. Keep in mind that you can now use projection in Table storage, which reduces the wire format size and can really speed things up when you have larger entities or many of them to return. If I ever had the need to query only a single type, I would probably use part of the RowKey or PartitionKey to specify the type, which would then allow me to query only a single type at a time (you could use a property, but that is not as efficient for query purposes as the PK or RK).
Edit: As noted by Lucifure, another great option is to design around it: use multiple tables, query in parallel, etc. You need to trade that off against complexity around timeouts and error handling, of course, but it is a viable and often good option as well, depending on your needs.
Reading a Generic Entity:
[DataServiceKey("PartitionKey", "RowKey")]
public class GenericEntity
{
public string PartitionKey { get; set; }
public string RowKey { get; set; }
Dictionary<string, object> properties = new Dictionary<string, object>();
internal object this[string key]
{
get
{
return this.properties[key];
}
set
{
this.properties[key] = value;
}
}
public override string ToString()
{
// TODO: append each property
return "";
}
}
void TestGenericTable()
{
var ctx = CustomerDataContext.GetDataServiceContext();
ctx.IgnoreMissingProperties = true;
ctx.ReadingEntity += new EventHandler<ReadingWritingEntityEventArgs>(OnReadingEntity);
var customers = from o in ctx.CreateQuery<GenericEntity>(CustomerDataContext.CustomersTableName) select o;
Console.WriteLine("Rows from '{0}'", CustomerDataContext.CustomersTableName);
foreach (GenericEntity entity in customers)
{
Console.WriteLine(entity.ToString());
}
}
// Credit goes to Pablo from ADO.NET Data Service team
public void OnReadingEntity(object sender, ReadingWritingEntityEventArgs args)
{
// TODO: Make these statics
XNamespace AtomNamespace = "http://www.w3.org/2005/Atom";
XNamespace AstoriaDataNamespace = "http://schemas.microsoft.com/ado/2007/08/dataservices";
XNamespace AstoriaMetadataNamespace = "http://schemas.microsoft.com/ado/2007/08/dataservices/metadata";
GenericEntity entity = args.Entity as GenericEntity;
if (entity == null)
{
return;
}
// read each property, type and value in the payload
var properties = args.Entity.GetType().GetProperties();
var q = from p in args.Data.Element(AtomNamespace + "content")
.Element(AstoriaMetadataNamespace + "properties")
.Elements()
where properties.All(pp => pp.Name != p.Name.LocalName)
select new
{
Name = p.Name.LocalName,
IsNull = string.Equals("true", p.Attribute(AstoriaMetadataNamespace + "null") == null ? null : p.Attribute(AstoriaMetadataNamespace + "null").Value, StringComparison.OrdinalIgnoreCase),
TypeName = p.Attribute(AstoriaMetadataNamespace + "type") == null ? null : p.Attribute(AstoriaMetadataNamespace + "type").Value,
p.Value
};
foreach (var dp in q)
{
entity[dp.Name] = GetTypedEdmValue(dp.TypeName, dp.Value, dp.IsNull);
}
}
private static object GetTypedEdmValue(string type, string value, bool isnull)
{
if (isnull) return null;
if (string.IsNullOrEmpty(type)) return value;
switch (type)
{
case "Edm.String": return value;
case "Edm.Byte": return Convert.ChangeType(value, typeof(byte));
case "Edm.SByte": return Convert.ChangeType(value, typeof(sbyte));
case "Edm.Int16": return Convert.ChangeType(value, typeof(short));
case "Edm.Int32": return Convert.ChangeType(value, typeof(int));
case "Edm.Int64": return Convert.ChangeType(value, typeof(long));
case "Edm.Double": return Convert.ChangeType(value, typeof(double));
case "Edm.Single": return Convert.ChangeType(value, typeof(float));
case "Edm.Boolean": return Convert.ChangeType(value, typeof(bool));
case "Edm.Decimal": return Convert.ChangeType(value, typeof(decimal));
case "Edm.DateTime": return XmlConvert.ToDateTime(value, XmlDateTimeSerializationMode.RoundtripKind);
case "Edm.Binary": return Convert.FromBase64String(value);
case "Edm.Guid": return new Guid(value);
default: throw new NotSupportedException("Not supported type " + type);
}
}
Another option, of course, is to have only a single entity type per table, query the tables in parallel, and merge the results sorted by timestamp.
In the long run this may prove to be the more prudent choice with regard to scalability and maintainability.
Alternatively, you could use some flavor of generic entity, as outlined by 'dunnry', where the non-common data is not explicitly typed and is instead persisted via a dictionary.
I have written an alternate Azure table storage client, Lucifure Stash, which supports additional abstractions over Azure table storage, including persisting to/from a dictionary, and it may work in your situation if that is the direction you want to pursue.
Lucifure Stash supports large data columns > 64K, arrays & lists, enumerations, composite keys, out of the box serialization, user defined morphing, public and private properties and fields and more. It is available free for personal use at http://www.lucifure.com or via NuGet.com.
Edit: Now open sourced at CodePlex
Use DynamicTableEntity as the entity type in your queries. It has a dictionary of properties you can look up. It can return any entity type.
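A minimal sketch with the WindowsAzure.Storage table SDK (the property-sniffing here is my assumption about how you would tell the entity types apart; table is a CloudTable and userId is the partition key string):

var query = new TableQuery<DynamicTableEntity>().Where(
    TableQuery.GenerateFilterCondition("PartitionKey", QueryComparisons.Equal, userId));

foreach (DynamicTableEntity entity in table.ExecuteQuery(query))
{
    if (entity.Properties.ContainsKey("NotificationSpecificProperty"))
    {
        // Treat as a NotificationEntry
        string note = entity.Properties["NotificationSpecificProperty"].StringValue;
    }
    else if (entity.Properties.ContainsKey("StatusUpdateSpecificProperty"))
    {
        // Treat as a StatusUpdateEntry
        string status = entity.Properties["StatusUpdateSpecificProperty"].StringValue;
    }
}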