How to avoid a long case statement - C#

I have an entity (Entity Framework 6) which maps to a table which has 100 columns named col_1, col_2 etc. The table is an existing legacy table which is used to interface to another system, so it's very generic and will never change.
So the entity has 100 properties which map to those columns.
public class DataEntity{
public string Column1{get;set;}
...
}
I then have data like this:
var columnNumber = 1;
var data = "The data";
How can I set col_1 to "The Data" without a long case statement?
var dataEntity = new DataEntity();
This is what I don't want to do:
switch(columnNumber)
{
case 1:
dataEntity.Column1 = data;
break;
}
Note that it is not possible to change the structure of the entity.

If you absolutely cannot change the structure of the data and you do not want to use a switch statement -- I'd probably look into some implementation of a strategy pattern:
http://www.dofactory.com/net/strategy-design-pattern
If you do not want to go that route, you could use reflection to set the values of the entity by following a convention where the properties are always named "Column1", "Column2", etc., but this is extremely fragile.
var myEntity = new MyEntity();
var value = "The data";
var columnNumber = 1;
PropertyInfo propertyInfo = myEntity.GetType().GetProperty(string.Format("Column{0}", columnNumber));
propertyInfo.SetValue(myEntity, Convert.ChangeType(value, propertyInfo.PropertyType), null);
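To avoid the per-call reflection cost (and repeated `GetProperty` lookups), the property lookup can be cached once per type. A minimal sketch, assuming the `Column1`, `Column2`, … naming convention from the question; the `ColumnSetter` class is illustrative, not part of the original code:

```csharp
using System;
using System.Collections.Generic;
using System.Reflection;

public class DataEntity
{
    public string Column1 { get; set; }
    public string Column2 { get; set; }
    // ... up to Column100 in the real entity
}

public static class ColumnSetter
{
    // Cache of column number -> PropertyInfo, built once per process.
    private static readonly Dictionary<int, PropertyInfo> Cache = BuildCache();

    private static Dictionary<int, PropertyInfo> BuildCache()
    {
        var cache = new Dictionary<int, PropertyInfo>();
        foreach (PropertyInfo p in typeof(DataEntity).GetProperties())
        {
            int number;
            if (p.Name.StartsWith("Column") && int.TryParse(p.Name.Substring(6), out number))
                cache[number] = p;
        }
        return cache;
    }

    public static void Set(DataEntity entity, int columnNumber, object value)
    {
        PropertyInfo property;
        if (!Cache.TryGetValue(columnNumber, out property))
            throw new ArgumentOutOfRangeException("columnNumber");
        property.SetValue(entity, Convert.ChangeType(value, property.PropertyType), null);
    }
}
```

With this in place, the question's example becomes `ColumnSetter.Set(dataEntity, 1, "The data");`, and the reflection lookup happens only once per property rather than on every call.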

I would definitely use an ExpandoObject for pushing anything I want into a dummy object, then map it to a specific instance of your class...
The mapper code you can get it from this answer:
https://codereview.stackexchange.com/questions/1002/mapping-expandoobject-to-another-object-type
Let's say you have a class representing your holy-mother-of-100-columns database table:
class LegacyTable
{
public string Column1 { get; set; }
public string Column2 { get; set; }
public string Column3 { get; set; }
// More to come
}
Now you can just work easily on a dynamic object.
dynamic expando = new ExpandoObject();
expando.Column1 = "Data 1";
expando.Column2 = "Data 2";
// Etcetera
You could also cast the expando to a dictionary for fast access (especially if your properties follow a pattern like "Column" + number):
var p = expando as IDictionary<String, object>;
p["Column" + 1] = "Data 1";
p["Column" + 2] = "Data 2";
p["Column" + 3] = "Data 3";
Now you can use the mapper to pass your props to an actual instance of the legacy class.
LegacyTable legacy = new LegacyTable();
Mapper<LegacyTable>.Map(expando, legacy);
It just works...
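If you prefer not to pull in the linked mapper, the same idea can be sketched in a few lines: copy each entry of the expando's backing dictionary onto a same-named writable property of the target via reflection. `SimpleMapper` is an illustrative name, not the API from the linked answer:

```csharp
using System;
using System.Collections.Generic;
using System.Dynamic;
using System.Reflection;

public static class SimpleMapper
{
    // Copies each expando entry onto a same-named writable property of target.
    // Entries with no matching property are silently skipped.
    public static void Map(ExpandoObject source, object target)
    {
        var entries = (IDictionary<string, object>)source;
        Type targetType = target.GetType();
        foreach (KeyValuePair<string, object> entry in entries)
        {
            PropertyInfo property = targetType.GetProperty(entry.Key);
            if (property != null && property.CanWrite)
                property.SetValue(target, entry.Value, null);
        }
    }
}
```

With that helper, `SimpleMapper.Map(expando, legacy)` fills in `Column1`, `Column2`, etc. on the `LegacyTable` instance.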


Linq to Collection - Dynamic Where Clause

I have a WPF application in which I would like to populate a TreeView. Before this happens, I would like the user to be able to select any of the fields/properties available in the collection from a drop-down, which will then populate the tree with data, grouped appropriately.
EXAMPLE Object
public class class_01{
public string Property_A {get; set;}
public string Property_B {get; set;}
public string Property_C {get; set;}
}
EXAMPLE Collection
List<class_01> list_01 = new List<class_01>();
Again, the drop-down will be bound with whatever properties are available from the list. This way, if the list were to change, the application would not require modification. This is a big requirement for the answer I need.
Let's say the user selects "Property_A".
I would like a linq query method that looks something like this.
Linq Query
public groupingModel getGrouping(string groupBy) // groupby equals property A in this example
{
List<class_01> list = getList(); //Returns the list of data of type class_01
var q = from x in list where ????? == groupBy select x; // I don't want to specify a specific property but rather have one applied dynamically
return q;
}
I have a custom object that the query would then be parsed into. that looks similar to the following.
Custom Object
public class class_02{
public string header {get; set;} // will be set to the discrete values of the selected groupby property
public List<class_01> prop_a {get; set;}
}
This would then be bound to the tree appropriately.
Any Thoughts?
EDIT
Additionally, how would I get a list of the unique values for the property the user selects?
For example, given:
{a = 1, b =2, c =3}, {a = 2, b = 3, c = 4}
if the user decides to group on property "a" how would we produce a collection of [1,2]?
This will be needed to construct a where clause.
foreach (value of user-selected property) {
    string whereClause = string.Format("{0} = {1}", selectedProperty, value);
}
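Getting the distinct values of a property chosen at runtime needs nothing beyond plain LINQ plus reflection. A sketch, assuming the class_01 shape from the example above; the `DistinctValues` helper name is illustrative:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using System.Reflection;

public class class_01
{
    public string Property_A { get; set; }
    public string Property_B { get; set; }
    public string Property_C { get; set; }
}

public static class Grouping
{
    // Returns the distinct values of the named property across the items.
    public static List<object> DistinctValues(IEnumerable<class_01> items, string propertyName)
    {
        PropertyInfo property = typeof(class_01).GetProperty(propertyName);
        if (property == null)
            throw new ArgumentException("Unknown property: " + propertyName);
        return items.Select(x => property.GetValue(x, null)).Distinct().ToList();
    }
}
```

For the {a = 1, ...}, {a = 2, ...} example, selecting property "a" this way yields the collection [1, 2] that the where clauses are built from.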
EDIT - Catching exception from Dynamic Query
public List<groupingModel> getGrouping(string groupBy)
{
List<groupingModel> lgm = new List<groupingModel>();
//Categories.Select(c => c.Id).ToList()
var w2 = getWVWellsList();
//var v = w2.Select(groupBy).Distinct().Cast<string>().ToArray();
var v = w2.Select(groupBy).Distinct();
foreach(string val in v)
{
string whereClause = string.Format("{0} = {1}", groupBy, val);
try
{
IEnumerable<WVWellModel> q2 = w2.Where(whereClause);
List<WVWellModel> l = q2.ToList<WVWellModel>();
lgm.Add(new groupingModel { Header = val, Wells = l });
}
catch (Exception e)
{
MessageBox.Show(e.Message, "Query Error", MessageBoxButton.OK, MessageBoxImage.Error);
throw new Exception("Generic Exception - Issue With Group By Query", e);
}
}
return lgm;
}
Exception
"No Property or field "Colorado" Exists in type WVWellModel"
In the case of this example, I can confirm that my where clause was "State = Colorado". It appears the query is treating the value as a property rather than the property State, which is a part of that type. It is as if they are reversed when the query is called.
Check out Scott Gu's post:
https://weblogs.asp.net/scottgu/dynamic-linq-part-1-using-the-linq-dynamic-query-library
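The exception happens because the unquoted value in "State = Colorado" is parsed by Dynamic LINQ as an identifier, not a string literal, so it looks for a member named Colorado. Dynamic LINQ's Where accepts positional parameters (@0, @1, …), which sidesteps the quoting problem entirely. A sketch of the loop with that change, using the types and names from the question (assumes the System.Linq.Dynamic `Where(string, params object[])` overload):

```csharp
// Inside getGrouping: pass the value as a parameter (@0) instead of
// splicing it into the predicate string, so "Colorado" is treated as
// a value rather than a member name.
foreach (string val in v)
{
    IEnumerable<WVWellModel> q2 = w2.Where(groupBy + " = @0", val);
    lgm.Add(new groupingModel { Header = val, Wells = q2.ToList() });
}
```

This also avoids having to escape quotes or worry about values that contain spaces or operators.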

How to bulk insert generic source array into arbitrary SQL table

My application uses Entity Framework. Most days I feel fine about that, but I have trouble when I need to do bulk insertions. I've yet to find a way for EF to do these quickly. As such, I need a solution that goes outside of EF to do the updates.
I want a method that takes a connection string, destination table name, and a generic array of source data, and performs the bulk insertions. Moreover, I'd like it to map the properties of the source data to the specific table fields, ideally without requiring attributes in the source object to designate the table field.
Thus, for this object:
public class Customer
{
//property populated & used only in memory
public string TempProperty { get; set; }
//properties saved in the database
public string Name { get; set; }
public string Address { get; set; }
public int Age { get; set; }
public string Comments { get; set; }
//various methods, constructors, etc.
}
I should be able to provide the table name Data.Customers, and the method should map Customer.Name -> Data.Customers.Name, Customer.Address -> Data.Customers.Address, etc.
I was surprised I couldn't find the code for something like this anywhere. Maybe this isn't the best approach to the problem? That possibility notwithstanding, here's the solution I came up with.
My starting outline was something like this:
Use the destination table and source object type to create mapping object using reflection.
Iterate through the source objects, applying the mapper, to generate the insertion data.
Insert the data into the destination table using the System.Data.SqlClient.SqlBulkCopy object.
When I see an outline like this, I try to rephrase it in terms of data types, because really all I'm doing is translating my input (T[]) into something that SqlBulkCopy will accept. But to do this, I need to decide on how to map fields.
Here's what I settled on:
var mapper = new List<Tuple<string, Func<object, string, object>>>();
I chose this to represent a table that looks like this:
+------------+----------------------------------------------+
| Field Name | Mapping Function |
+------------+----------------------------------------------+
| Name | Customer.Name -> Data.Customers.Name |
| Address | Customer.Address -> Data.Customers.Address |
| Age | Customer.Age -> Data.Customers.Age |
| Comments | Customer.Comments -> Data.Customers.Comments |
+------------+----------------------------------------------+
The fact that it's a list represents the rows. That leaves us with
Tuple<string, Func<object, string, object>>. This creates a sort of Dictionary (but indexed) where a given string (the field name) is mapped to a function that, when given a source object T and a source field (e.g. Address), will retrieve the appropriate value. If no corresponding property is found for a table field, we'll just return null.
After validating that the input values are valid (connection valid, table exists, etc.), we'll create our mapping object:
//get all the column names for the table to build mapping object
SqlCommand command = new SqlCommand($"SELECT TOP 1 * FROM {foundTableName}", conn);
SqlDataReader reader = command.ExecuteReader();
//build mapping object by iterating through rows and verifying that there is a match in the table
var mapper = new List<Tuple<string, Func<object, string, object>>>();
foreach (DataRow col in reader.GetSchemaTable().Rows)
{
//get column information
string columnName = col.Field<string>("ColumnName");
PropertyInfo property = typeof(T).GetProperty(columnName);
Func<object, string, object> map;
if (property == null)
{
//check if it's nullable and exit if not
bool nullable = col.Field<bool>("AllowDBNull");
if (!nullable)
return $"No corresponding property found for Non-nullable field '{columnName}'.";
//if it's nullable, create mapping function
map = new Func<object, string, object>((a, b) => null);
}
else
map = new Func<object, string, object>((src, fld) => typeof(T).GetProperty(fld).GetValue(src));
//add mapping object
mapper.Add(new Tuple<string, Func<object, string, object>>(columnName, map));
}
The SqlBulkCopy object accepts DataRow[] as an input, so from here we can simply create template DataRow objects from the destination table and fill them from our mapper:
//get all the data
int dataCount = sourceData.Count();
var rows = new DataRow[dataCount];
DataTable destTableDT = new DataTable();
destTableDT.Load(reader);
for (int x = 0; x < dataCount; x++)
{
var dataRow = destTableDT.NewRow();
dataRow.ItemArray = mapper.Select(m => m.Item2.Invoke(sourceData[x], m.Item1))
.ToArray();
rows[x] = dataRow;
}
Then finish it out by writing the data:
//set up the bulk copy connection
SqlBulkCopy sbc = new SqlBulkCopy(conn, SqlBulkCopyOptions.TableLock |
SqlBulkCopyOptions.UseInternalTransaction, null);
sbc.DestinationTableName = foundTableName;
sbc.BatchSize = BATCH_SIZE;
sbc.WriteToServer(rows);
And that's it! Works like a charm, and ran too fast for me to bother benchmarking (EF was taking several minutes to run these imports).
I should note that I left out a lot of my error checking code and will probably be adding more. However, if you'd like to see my class in its entirety, here it is:
using System;
using System.Collections.Generic;
using System.Data;
using System.Data.Common;
using System.Data.SqlClient;
using System.Linq;
using System.Reflection;
namespace DatabaseUtilities
{
public static class RapidDataTools
{
const int SCHEMA_SCHEMA_NAME = 1;
const int SCHEMA_TABLE_NAME = 2;
const int BATCH_SIZE = 1000;
/// <summary>
/// Imports an array of data into a specified table. It does so by mapping object properties
/// to table columns. Only properties with the same name as the column name will be copied;
/// other columns will be left null. Non-nullable columns with no corresponding property will
/// throw an error.
/// </summary>
/// <param name="connectionString"></param>
/// <param name="destTableName">Qualified table name (e.g. Admin.Table)</param>
/// <param name="sourceData"></param>
/// <returns></returns>
public static string Import<T>(string connectionString, string destTableName, T[] sourceData)
{
//get destination table qualified name
string[] tableParts = destTableName.Split('.');
if (tableParts.Count() != 2) return $"Invalid or unqualified destination table name: {destTableName}.";
string destSchema = tableParts[0];
string destTable = tableParts[1];
//create the database connection
SqlConnection conn = GetConnection(connectionString);
if (conn == null) return "Invalid connection string.";
//establish connection
try { conn.Open(); }
catch { return "Could not connect to database using provided connection string."; }
//make sure the requested table exists
string foundTableName = string.Empty;
foreach (DataRow row in conn.GetSchema("Tables").Rows)
if (row[SCHEMA_SCHEMA_NAME].ToString().Equals(destSchema, StringComparison.CurrentCultureIgnoreCase) &&
row[SCHEMA_TABLE_NAME].ToString().Equals(destTable, StringComparison.CurrentCultureIgnoreCase))
{
foundTableName = $"{row[SCHEMA_SCHEMA_NAME]}.{row[SCHEMA_TABLE_NAME]}";
break;
}
if (foundTableName == string.Empty) return $"Specified table '{destTableName}' could not be found in the database.";
//get all the column names for the table to build mapping object
SqlCommand command = new SqlCommand($"SELECT TOP 1 * FROM {foundTableName}", conn);
SqlDataReader reader = command.ExecuteReader();
//build mapping object by iterating through rows and verifying that there is a match in the table
var mapper = new List<Tuple<string, Func<object, string, object>>>();
foreach (DataRow col in reader.GetSchemaTable().Rows)
{
//get column information
string columnName = col.Field<string>("ColumnName");
PropertyInfo property = typeof(T).GetProperty(columnName);
Func<object, string, object> map;
if (property == null)
{
//check if it's nullable and exit if not
bool nullable = col.Field<bool>("AllowDBNull");
if (!nullable)
return $"No corresponding property found for Non-nullable field '{columnName}'.";
//if it's nullable, create mapping function
map = new Func<object, string, object>((a, b) => null);
}
else
map = new Func<object, string, object>((src, fld) => typeof(T).GetProperty(fld).GetValue(src));
//add mapping object
mapper.Add(new Tuple<string, Func<object, string, object>>(columnName, map));
}
//get all the data
int dataCount = sourceData.Count();
var rows = new DataRow[dataCount];
DataTable destTableDT = new DataTable();
destTableDT.Load(reader);
for (int x = 0; x < dataCount; x++)
{
var dataRow = destTableDT.NewRow();
dataRow.ItemArray = mapper.Select(m => m.Item2.Invoke(sourceData[x], m.Item1)).ToArray();
rows[x] = dataRow;
}
//close the old connection
conn.Close();
//set up the bulk copy connection
SqlBulkCopy sbc = new SqlBulkCopy(conn, SqlBulkCopyOptions.TableLock | SqlBulkCopyOptions.UseInternalTransaction, null);
sbc.DestinationTableName = foundTableName;
sbc.BatchSize = BATCH_SIZE;
//establish connection
try { conn.Open(); }
catch { return "Failed to re-established connection to the database after reading data."; }
//write data
try { sbc.WriteToServer(rows); }
catch (Exception ex) { return $"Batch write failed. Details: {ex.Message} - {ex.StackTrace}"; }
//if we got here, everything worked!
return string.Empty;
}
private static SqlConnection GetConnection(string connectionString)
{
DbConnectionStringBuilder csb = new DbConnectionStringBuilder();
try { csb.ConnectionString = connectionString; }
catch { return null; }
return new SqlConnection(csb.ConnectionString);
}
}
}
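A hedged usage sketch of the class above: the connection string and table name are placeholders, and Customer is the class from the question. Import returns an empty string on success or an error message otherwise:

```csharp
// Hypothetical call site for RapidDataTools.Import<T>.
var customers = new[]
{
    new Customer { Name = "Ada", Address = "1 Main St", Age = 36, Comments = "VIP" },
    new Customer { Name = "Linus", Address = "2 Oak Ave", Age = 28, Comments = "" }
};

string result = DatabaseUtilities.RapidDataTools.Import(
    "Server=.;Database=MyDb;Integrated Security=true;", // placeholder connection string
    "Data.Customers",                                   // qualified destination table
    customers);

if (result != string.Empty)
    Console.WriteLine("Import failed: " + result);
```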
Take a look at the NuGet package "EntityFramework.Utilities", which you can add to your project. It has features for this. You'll have to do some recoding, but it will probably benefit you in the end.
References:
https://www.nuget.org/packages/EFUtilities/ (package)
https://github.com/MikaelEliasson/EntityFramework.Utilities (source +
examples)
There are basically three approaches for performing a bulk insert in Entity Framework:
Entity Framework Extensions (recommended)
EntityFramework.BulkInsert (limited and unsupported)
SqlBulkCopy (complex to make it work)
The first two libraries add extension methods to the DbContext:
using (var ctx = new EntitiesContext())
{
ctx.BulkInsert(list);
}
You can learn more about these three techniques in this article.
Disclaimer: I'm the owner of the project Entity Framework Extensions

Updating entire node with mutating cypher in Neo4jclient

I need to update all the properties of a given node, using mutating cypher. I want to move away from Node and NodeReference because I understand they are deprecated, so can't use IGraphClient.Update. I'm very new to mutating cypher. I'm writing in C#, using Neo4jclient as the interface to Neo4j.
I did the following code which updates the "Name" property of a "resunit" where property "UniqueId" equals 2. This works fine. However,
* my resunit object has many properties
* I don't know which properties have changed
* I'm trying to write code that will work with different types of objects (with different properties)
It was possible with IGraphClient.Update to pass in an entire object, and it would take care of creating Cypher that sets all properties.
Can I somehow pass in my object with mutating cypher as well?
The only alternative I can see is to reflect over the object to find all properties and generate .Set for each, which I'd like to avoid. Please tell me if I'm on the wrong track here.
string newName = "A welcoming home";
var query2 = agencyDataAccessor
.GetAgencyByKey(requestingUser.AgencyKey)
.Match("(agency)-[:HAS_RESUNIT_NODE]->(categoryResUnitNode)-[:THE_UNIT_NODE]->(resunit)")
.Where("resunit.UniqueId = {uniqueId}")
.WithParams(new { uniqueId = 2 })
.With("resunit")
.Set("resunit.Name = {residentialUnitName}")
.WithParams(new { residentialUnitName = newName });
query2.ExecuteWithoutResults();
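For completeness, the reflection-based alternative mentioned above (generating one SET fragment per property) can be sketched like this. The helper name and shape are illustrative only, and the single-parameter SET shown in the answer is simpler:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public static class CypherSetBuilder
{
    // Builds "n.Prop1 = {p0}, n.Prop2 = {p1}, ..." and fills the matching
    // parameter dictionary, one pair per readable public property.
    public static string BuildSetClause(string alias, object entity,
                                        IDictionary<string, object> parameters)
    {
        var clauses = new List<string>();
        int i = 0;
        foreach (var property in entity.GetType().GetProperties().Where(p => p.CanRead))
        {
            string paramName = "p" + i++;
            clauses.Add(string.Format("{0}.{1} = {{{2}}}", alias, property.Name, paramName));
            parameters[paramName] = property.GetValue(entity, null);
        }
        return string.Join(", ", clauses);
    }
}
```

The resulting clause and dictionary could then feed `.Set(clause).WithParams(parameters)`, but as the answer shows, passing the whole object as a single parameter avoids all of this.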
It is indeed possible to pass an entire object! Below I have an object called Thing defined as such:
public class Thing
{
public int Id { get; set; }
public string Value { get; set; }
public DateTimeOffset Date { get; set; }
public int AnInt { get; set; }
}
Then the following code creates a new Thing, inserts it into the DB, gets it back, and updates it using just one Set command:
Thing thing = new Thing{AnInt = 12, Date = new DateTimeOffset(DateTime.Now), Value = "Foo", Id = 1};
gc.Cypher
.Create("(n:Test {thingParam})")
.WithParam("thingParam", thing)
.ExecuteWithoutResults();
var thingRes = gc.Cypher.Match("(n:Test)").Where((Thing n) => n.Id == 1).Return(n => n.As<Thing>()).Results.Single();
Console.WriteLine("Found: {0},{1},{2},{3}", thingRes.Id, thingRes.Value, thingRes.AnInt, thingRes.Date);
thingRes.AnInt += 100;
thingRes.Value = "Bar";
thingRes.Date = thingRes.Date.AddMonths(1);
gc.Cypher
.Match("(n:Test)")
.Where((Thing n) => n.Id == 1)
.Set("n = {thingParam}")
.WithParam("thingParam", thingRes)
.ExecuteWithoutResults();
var thingRes2 = gc.Cypher.Match("(n:Test)").Where((Thing n) => n.Id == 1).Return(n => n.As<Thing>()).Results.Single();
Console.WriteLine("Found: {0},{1},{2},{3}", thingRes2.Id, thingRes2.Value, thingRes2.AnInt, thingRes2.Date);
Which gives:
Found: 1,Foo,12,2014-03-27 15:37:49 +00:00
Found: 1,Bar,112,2014-04-27 15:37:49 +00:00
All properties nicely updated!

How to create own dynamic type or dynamic object in C#?

There is, for example, the ViewBag property of ControllerBase class and we can dynamically get/set values and add any number of additional fields or properties to this object, which is cool. I want to use something like that, beyond MVC application and Controller class in other types of applications. When I tried to create dynamic object and set its property like this:
1. dynamic MyDynamic = new { A="a" };
2. MyDynamic.A = "asd";
3. Console.WriteLine(MyDynamic.A);
I got a RuntimeBinderException with the message Property or indexer '<>f__AnonymousType0.A' cannot be assigned to -- it is read only in line 2. Also, I suspect it's not quite what I'm looking for. Maybe there is some class which allows me to do something like:
??? MyDynamic = new ???();
MyDynamic.A = "A";
MyDynamic.B = "B";
MyDynamic.C = DateTime.Now;
MyDynamic.TheAnswerToLifeTheUniverseAndEverything = 42;
with dynamic adding and setting properties.
dynamic MyDynamic = new System.Dynamic.ExpandoObject();
MyDynamic.A = "A";
MyDynamic.B = "B";
MyDynamic.C = "C";
MyDynamic.Number = 12;
MyDynamic.MyMethod = new Func<int>(() =>
{
return 55;
});
Console.WriteLine(MyDynamic.MyMethod());
Read more about the ExpandoObject class for more samples: it "represents an object whose members can be dynamically added and removed at run time."
I recently had a need to take this one step further: making the property additions to the dynamic object dynamic themselves, based on user-defined entries. The examples here, and in Microsoft's ExpandoObject documentation, do not specifically address adding properties dynamically, but it can be surmised from how you enumerate and delete properties. Anyhow, I thought this might be helpful to someone. Here is an extremely simplified version of how to add truly dynamic properties to an ExpandoObject (ignoring keyword and other handling):
// my pretend dataset
List<string> fields = new List<string>();
// my 'columns'
fields.Add("this_thing");
fields.Add("that_thing");
fields.Add("the_other");
dynamic exo = new System.Dynamic.ExpandoObject();
foreach (string field in fields)
{
((IDictionary<String, Object>)exo).Add(field, field + "_data");
}
// output - from Json.Net NuGet package
textBox1.Text = Newtonsoft.Json.JsonConvert.SerializeObject(exo);
dynamic myDynamic = new { PropertyOne = true, PropertyTwo = false};
ExpandoObject is what you are looking for.
dynamic MyDynamic = new ExpandoObject(); // note, the type MUST be dynamic to use dynamic invoking.
MyDynamic.A = "A";
MyDynamic.B = "B";
MyDynamic.C = "C";
MyDynamic.TheAnswerToLifeTheUniverseAndEverything = 42;
var data = new { studentId = 1, StudentName = "abc" };
Or, if the values are already in scope:
var data = new { studentId, StudentName };
You can use ExpandoObject Class which is in System.Dynamic namespace.
dynamic MyDynamic = new ExpandoObject();
MyDynamic.A = "A";
MyDynamic.B = "B";
MyDynamic.C = "C";
MyDynamic.SomeProperty = SomeValue;
MyDynamic.number = 10;
MyDynamic.Increment = (Action)(() => { MyDynamic.number++; });
More Info can be found at
ExpandoObject MSDN
dynamic MyDynamic = new ExpandoObject();

LINQ to DataSet error: System.InvalidCastException: Specified cast is not valid

I get the following exception using LINQ to DataSet: "System.InvalidCastException: Specified cast is not valid."
The problem is as follows: I have a model with two values of type int?. The values in the database are not required, so some fields are blank. I have read the table into a DataSet and now I need to query the DataSet using the following code.
//model
public class Model
{
// Public Properties
...
...
...
public int? YearBegin { get; set; }
public int? YearEnd { get; set; }
}
//query
var list = from m in data.Tables["Models"].AsEnumerable()
select new Model
{
// rest of members omitted to simplify
YearBegin = m.Field<int>("YearBegin"),
YearEnd = m.Field<int>("YearEnd")
};
I have tried the following; none have worked:
m.Field<int?>("YearBegin")
YearEnd = m.IsNull("YearEnd") ? null : m.Field<int>("YearEnd")
Is there another way to check whether the field has a value, similar to String.IsNullOrEmpty()?
Using string as the type is not a possibility...
Thanks
You aren't using a typed DataSet, so my first question would be: does the DataTable know that those fields are supposed to be int? in the first place, or are they listed as strings? If the DataTable is treating those fields as strings, you will experience that error. The following code assumes a TestData DataSet with a Models DataTable, with two nullable string columns, YearBegin and YearEnd:
using (TestData ds = new TestData())
{
// Typed Rows
ds.Models.AddModelsRow("1", "2");
ds.Models.AddModelsRow(ds.Models.NewModelsRow()); // NULL INFO TEST
// Untyped rows
DataRow r = ds.Models.NewRow();
r[0] = "4";
r[1] = "5";
ds.Models.Rows.Add(r);
//query
var list = from m in ds.Tables["Models"].AsEnumerable()
select new Model
{
// rest of members omitted to simplify
YearBegin = m.Field<int?>("YearBegin"),
YearEnd = m.Field<int?>("YearEnd"),
};
}
That code will encounter the InvalidCastException. However, when I flip the types on the DataTable to nullable Int32, then the nearly identical code works properly:
using (TestData ds = new TestData())
{
// Typed Rows
ds.Models.AddModelsRow(1, 2);
ds.Models.AddModelsRow(ds.Models.NewModelsRow()); // NULL INFO TEST
// Untyped rows
DataRow r = ds.Models.NewRow();
r[0] = 4;
r[1] = 5;
ds.Models.Rows.Add(r);
//query
var list = from m in ds.Tables["Models"].AsEnumerable()
select new Model
{
// rest of members omitted to simplify
YearBegin = m.Field<int?>("YearBegin"),
YearEnd = m.Field<int?>("YearEnd"),
};
}
Take a look at your DataTable; you can correct your issue there. The Field cast to int? will not work unless your DataTable column type matches int?.
Problem solved. I am working against a legacy Access database, and the data type was stored as Integer instead of Long Integer, meaning it is represented as an Int16 in the DataSet, hence the invalid cast exception.
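For that situation, reading the column at its actual stored type and widening it afterwards avoids the cast failure. A minimal sketch, assuming the column really surfaces as Int16 (short) in the DataSet:

```csharp
using System;
using System.Data;
using System.Linq;

class Program
{
    static void Main()
    {
        // Simulate the legacy table: an Access "Integer" comes through as Int16.
        var table = new DataTable("Models");
        table.Columns.Add("YearBegin", typeof(short));
        table.Rows.Add((short)1999);
        table.Rows.Add(DBNull.Value); // blank value in the database

        var years = table.AsEnumerable()
            // Field<short?> matches the stored type; the cast to int? widens it
            // and preserves nulls.
            .Select(m => (int?)m.Field<short?>("YearBegin"))
            .ToList();

        Console.WriteLine(years[0]);         // 1999
        Console.WriteLine(years[1] == null); // True
    }
}
```

The same pattern works in the original query: YearBegin = (int?)m.Field<short?>("YearBegin").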
