Is there a way to pass child or multiple objects to Dapper as query parameter, when they have properties with the same name?
For example, if I have these classes:
class Person
{
    public int Id { get; set; }
    public string Name { get; set; }
    public City City { get; set; }
}

class City
{
    public int Id { get; set; }
    public string Name { get; set; }
}
and I want to execute this query:
connect.QueryFirst<Result>("select :Id Id, :Name Name, :City_Id City_Id, :City_Name City_Name from dual", personParams);
The only way I managed to do it without reflection was passing the first object and then adding the other properties one by one:
var personParams = new DynamicParameters(person);
personParams.AddDynamicParams(new { City_Id = person.City.Id, City_Name = person.City.Name });
But the database has hundreds of tables, some with more than a hundred columns, and I need to split them into multiple classes, so adding the parameters one by one would be impractical.
I tried adding the parameters to a temporary bag and then adding a prefix to each one, something like this:
static DynamicParameters GetParameters(object mainEntity, object otherEntities)
{
    var allParams = new DynamicParameters(mainEntity);
    foreach (PropertyDescriptor descriptor in TypeDescriptor.GetProperties(otherEntities))
    {
        object obj = descriptor.GetValue(otherEntities);
        var parameters = new DynamicParameters(obj);
        foreach (var paramName in parameters.ParameterNames)
        {
            var value = parameters.Get<object>(paramName);
            allParams.Add($"{descriptor.Name}{paramName}", value);
        }
    }
    return allParams;
}
But it doesn't work, because Dapper only populates the parameters when the command executes, not when the DynamicParameters object is created.
I wanted to avoid reflection because Dapper is very good at it, and my own code would probably perform much worse. Is there a way to do this, or is reflection my only option?
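For reference, a reflection-based version could look like the sketch below: a hypothetical FlattenForQuery helper (the name is illustrative, not part of Dapper) that keeps scalar properties as-is and prefixes one level of nested objects with "{Property}_". The resulting dictionary can be fed to Dapper via DynamicParameters.AddDynamicParams, which accepts an IDictionary<string, object>.

```csharp
using System;
using System.Collections.Generic;
using System.Reflection;

static class ParameterFlattener
{
    // Flattens one level of nesting: scalar properties keep their names,
    // nested objects contribute "{Property}_{Child}" entries.
    public static Dictionary<string, object> FlattenForQuery(object entity)
    {
        var result = new Dictionary<string, object>();
        foreach (PropertyInfo prop in entity.GetType().GetProperties())
        {
            object value = prop.GetValue(entity);
            // Treat non-string reference types as nested objects to flatten.
            bool isNested = prop.PropertyType.IsClass && prop.PropertyType != typeof(string);
            if (isNested && value != null)
            {
                foreach (PropertyInfo child in prop.PropertyType.GetProperties())
                    result[$"{prop.Name}_{child.Name}"] = child.GetValue(value);
            }
            else
            {
                result[prop.Name] = value;
            }
        }
        return result;
    }
}
```

The reflection cost can be reduced by caching the PropertyInfo arrays per type, but it will still be slower than Dapper's compiled accessors.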
Related
I have this C# working code now.
using (var bulk = new SqlBulkCopy(_connectionString))
{
    bulk.DestinationTableName = "Person";
    bulk.WriteToServer(DataTableHelper.ToDataTable(persons));
}
The problem is that the above code only applies to one table and one list object, but I have 3 more tables, each with a different list, that use the same code.
Example:

TableName | List Object
--------- | -----------
Person    | persons
Address   | address
Contact   | contact
Instead of duplicating the code three times, I am creating a method that takes parameters.
How can I achieve that using a Tuple, a dictionary, or a generic, given that each call only needs two values: a table name mapped to a list object?
I tried something like below but the syntax is invalid.
var dict = new Dictionary<string, List<T>>();
dict.Add("Person", persons);
dict.Add("Address", address);
dict.Add("Contact", contact);
Assuming DataTableHelper.ToDataTable() can handle a list of Person/Address/Contact, one way would be to create a generic method that takes the list of objects and the name of the table:
void WriteToServer<T>(List<T> objects, string tableName)
{
    using (var bulk = new SqlBulkCopy(_connectionString))
    {
        bulk.DestinationTableName = tableName;
        bulk.WriteToServer(DataTableHelper.ToDataTable(objects));
    }
}
Then, you can use it like this:
WriteToServer(persons, "Person");
WriteToServer(addresses, "Address");
WriteToServer(contacts, "Contact");
And if Person, Address, and Contact share the same base type (base class or an interface), then you should add a constraint to the method:
void WriteToServer<T>(List<T> objects, string tableName) where T : ISomething
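As for the dictionary idea: it fails with List<T> because each list has a different element type, but it works if you convert to DataTable first, since DataTable is not generic. A self-contained sketch (BulkHelper.ToDataTable is an illustrative stand-in for the question's DataTableHelper.ToDataTable, and the SqlBulkCopy call is commented out so the sketch compiles without a database):

```csharp
using System;
using System.Collections.Generic;
using System.Data;
using System.Linq;

static class BulkHelper
{
    // Minimal stand-in for the question's DataTableHelper.ToDataTable,
    // included only so this sketch is self-contained.
    public static DataTable ToDataTable<T>(IEnumerable<T> items)
    {
        var table = new DataTable();
        var props = typeof(T).GetProperties();
        foreach (var prop in props)
            table.Columns.Add(prop.Name,
                Nullable.GetUnderlyingType(prop.PropertyType) ?? prop.PropertyType);
        foreach (var item in items)
            table.Rows.Add(props.Select(p => p.GetValue(item) ?? DBNull.Value).ToArray());
        return table;
    }

    // Because DataTable erases the element type, one dictionary holds every batch.
    public static void WriteAll(Dictionary<string, DataTable> batches, string connectionString)
    {
        foreach (var pair in batches)
        {
            // using (var bulk = new SqlBulkCopy(connectionString))
            // {
            //     bulk.DestinationTableName = pair.Key;
            //     bulk.WriteToServer(pair.Value);
            // }
        }
    }
}
```

Usage would then be building the dictionary once, e.g. `new Dictionary<string, DataTable> { ["Person"] = BulkHelper.ToDataTable(persons), ["Address"] = BulkHelper.ToDataTable(address), ... }`, and passing it to WriteAll.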
I will illustrate the method of use with a very simple example:
public class Person
{
    public int ID { get; set; }
    public string PersonName { get; set; }
}

public class Address
{
    public string PersonAddress { get; set; }
}

public class StoreDataBase<T> where T : class
{
    public static void ToDataTable(T item)
    {
        using (TestDBContext db = new TestDBContext())
        {
            db.GetTable<T>().InsertOnSubmit(item);
            db.SubmitChanges();
        }
    }
}
// Usage:
Person person = new Person() { ID = 1, PersonName = "Meysam" };
StoreDataBase<Person>.ToDataTable(person);

Address address = new Address() { PersonAddress = "Asia" };
StoreDataBase<Address>.ToDataTable(address);
I have 2 parts of an API that share some similarities but function differently. I am currently trying to take data from a list object of People from class B and add it to a list of People created from class A (hopefully that is explained well enough).
The People structure in the 2 classes are actually the same:
[XmlRoot(ElementName = "people")]
public class People
{
    [XmlElement(ElementName = "member")]
    public List<Member> Member { get; set; }
}

[XmlRoot(ElementName = "member")]
public class Member
{
    [XmlElement(ElementName = "firstName")]
    public string FirstName { get; set; }

    [XmlElement(ElementName = "lastName")]
    public string LastName { get; set; }

    [XmlAttribute(AttributeName = "memberId")]
    public string MemberId { get; set; }

    [XmlAttribute(AttributeName = "memberNotes")]
    public string Notes { get; set; }

    [XmlElement(ElementName = "departed")]
    public string Departed { get; set; }

    [XmlElement(ElementName = "currentPosition")]
    public Name CurrentPosition { get; set; }
}
In normal operation the following code sets the People list just fine:
public People PersonData { get; set; }
...
var results = ApiA.People;
PersonData = results.Member; // during normal operation only one entry of member is returned
However, another operation returns a larger list of member objects, so I am trying to add to the same list, to ensure that later handling uses a single method for both operations (the data structure is the same). What I am trying is as follows:
if (PersonData == null)
    PersonData = new API_A.People();

var results = ApiB.People; // person data here belongs to API_B.Person
foreach (var res in results)
{
    if (res?.Member != null)
    {
        if (PersonData == null)
        {
            PersonData.Member.AddRange(res.People.Member.Cast<API_A.Member>());
            break;
        }
        else
            PersonData.Member.Union(res.People.Member.Cast<API_A.Member>());
    }
}
No errors are shown in the IDE, but at runtime I continually receive a NullReferenceException during the AddRange operation. As I am still learning, I would really appreciate understanding what I am doing wrong here.
Two problems are obvious.
If PersonData is null, you cannot access PersonData.Member before creating the PersonData object first. So in your case it should be:
PersonData = new People();
The next problem you'll have is the casting. Even if everything is the same in the two classes, you cannot cast one to the other unless there is an inheritance relation between them. What you should do is map one class to the other: just create a mapper method somewhere that maps your API_A.Member to API_B.Member and/or vice versa. This kind of mapping workaround is widely used, so feel safe creating this heavy-looking mapping method.
Example:
API_A.Member MapBToA(API_B.Member member)
{
    return new API_A.Member
    {
        FirstName = member.FirstName,
        ...
    };
}
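Because the two Member shapes are identical, the hand-written mapper can also be generated with reflection. A minimal sketch (the SimpleMapper name is hypothetical) that copies every readable property to a same-named writable property on the target type:

```csharp
using System;
using System.Reflection;

static class SimpleMapper
{
    // Copies properties by name from source to a new TTarget instance.
    // Only assigns when a writable, type-compatible property exists on the target.
    public static TTarget MapByName<TTarget>(object source) where TTarget : new()
    {
        var target = new TTarget();
        foreach (PropertyInfo sourceProp in source.GetType().GetProperties())
        {
            PropertyInfo targetProp = typeof(TTarget).GetProperty(sourceProp.Name);
            if (targetProp != null && targetProp.CanWrite
                && targetProp.PropertyType.IsAssignableFrom(sourceProp.PropertyType))
            {
                targetProp.SetValue(target, sourceProp.GetValue(source));
            }
        }
        return target;
    }
}
```

With that, the loop above could use something like `PersonData.Member.AddRange(res.People.Member.Select(m => SimpleMapper.MapByName<API_A.Member>(m)));` instead of the invalid Cast. Libraries such as AutoMapper do the same thing with caching and configuration on top.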
The scenario is the following: I receive a message containing a lot of variables, several hundreds. I need to write this to Azure Table storage where the partition key is the name of the individual variables and the value gets mapped to e.g. Value.
Let’s say the payload looks like the following:
public class Payload
{
    public long DeviceId { get; set; }
    public string Name { get; set; }
    public double Foo { get; set; }
    public double Rpm { get; set; }
    public double Temp { get; set; }
    public string Status { get; set; }
    public DateTime Timestamp { get; set; }
}
And my TableEntry like this:
public class Table : TableEntity
{
    public Table(string partitionKey, string rowKey)
    {
        this.PartitionKey = partitionKey;
        this.RowKey = rowKey;
    }

    public Table() { }

    public long DeviceId { get; set; }
    public string Name { get; set; }
    public double Value { get; set; }
    public string Signal { get; set; }
    public string Status { get; set; }
}
In order to write that to Table storage, I need to
var table = new Table(primaryKey, payload.Timestamp.ToString(TimestampFormat))
{
    DeviceId = payload.DeviceId,
    Name = payload.Name,
    Status = payload.Status,
    Value = value (payload.Foo or payload.Rpm or payload.Temp),
    Signal = primarykey/Name of variable ("foo" or "rpm" or "temp"),
    Timestamp = payload.Timestamp
};
var insertOperation = TableOperation.Insert(table);
await this.cloudTable.ExecuteAsync(insertOperation);
I don’t want to copy this 900 times (or how many variables there happen to be in the payload message; this is a fixed number).
I could make a method to create the table, but I will still have to call this 900 times.
I thought maybe AutoMapper could help out.
Are they always the same variables? A different approach could be to use DynamicTableEntity, in which you basically have a TableEntity where you can fill out all additional fields after the RowKey/PartitionKey pair:
var tableEntity = new DynamicTableEntity();
tableEntity.PartitionKey = "partitionkey";
tableEntity.RowKey = "rowkey";

dynamic json = JsonConvert.DeserializeObject("{bunch:'of',stuff:'here'}");
foreach (var item in json)
{
    // Each item is a JProperty, so use its Name and wrap the value in an EntityProperty.
    tableEntity.Properties.Add(item.Name, EntityProperty.GeneratePropertyForString((string)item.Value));
}
// Save etc.
The problem is mapping these properties, right?
Value = value (payload.Foo or payload.Rpm or payload.Temp),
Signal = primarykey/Name of variable ("foo" or "rpm" or "temp"),
This conditional mapping can be done via Reflection:
object payload = new A { Id = 1 };
object value = TryGetPropertyValue(payload, "Id", "Name"); //returns 1
payload = new B { Name = "foo" };
value = TryGetPropertyValue(payload, "Id", "Name"); //returns "foo"
public object TryGetPropertyValue(object obj, params string[] propertyNames)
{
    foreach (var name in propertyNames)
    {
        PropertyInfo propertyInfo = obj.GetType().GetProperty(name);
        if (propertyInfo != null)
            return propertyInfo.GetValue(obj);
    }
    throw new ArgumentException();
}
You can map the rest of the properties (those with equal names in source and destination) with an AutoMapper.Mapper.DynamicMap call instead of AutoMapper.Mapper.Map, to avoid creating hundreds of configuration maps. Or just cast your payload to dynamic and map it manually.
You can create a DynamicTableEntity from your Payload objects with one or two lines of code using the TableEntity.Flatten method in the SDK, or use the ObjectFlattenerRecomposer NuGet package if you are also worried about ICollection type properties. Assign it a PartitionKey/RowKey and write the flattened Payload object into the table as a DynamicTableEntity. When you read it back, read it as a DynamicTableEntity and use the TableEntity.ConvertBack method to recreate the original object. You do not even need the intermediate Table class.
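A sketch of that round trip, assuming the Microsoft.WindowsAzure.Storage (or Microsoft.Azure.Cosmos.Table) SDK where TableEntity.Flatten and TableEntity.ConvertBack live; partitionKey, rowKey, and cloudTable are taken from the surrounding code:

```csharp
// Flatten the Payload into a property dictionary and write it.
var flattened = TableEntity.Flatten(payload, new OperationContext());
var entity = new DynamicTableEntity(partitionKey, rowKey) { Properties = flattened };
await cloudTable.ExecuteAsync(TableOperation.Insert(entity));

// Read it back as a DynamicTableEntity and recompose the original Payload.
var retrieveResult = await cloudTable.ExecuteAsync(
    TableOperation.Retrieve(partitionKey, rowKey));
var retrieved = (DynamicTableEntity)retrieveResult.Result;
Payload restored = TableEntity.ConvertBack<Payload>(retrieved.Properties, new OperationContext());
```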
I want to create a list based on the query that gets passed into my method. My issue is determining how to add those items to the list that I return as a result. The following code includes my list and what will hopefully be the way I populate it:
public void QueryInto<T>(ref List<T> returnType, string queryString, string[] parameters = null)
{
    try
    {
        // Setup the query.
        SetupCommand(queryString);

        // Setup the parameters.
        AddParameters(parameters);

        // Execute the reader.
        OpenReader();

        // Make sure the statement returns rows...
        if (reader.HasRows)
        {
            // For each row, do this...
            while (reader.Read())
            {
                // TODO: use datamapping to map to model and add items to list...
            }
        }
    }
    finally
    {
        // Close the reader even if the query fails.
        reader?.Close();
    }
}
Perhaps there is a better way of doing this, but so far this is the direction Google Has directed me!
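For reference, the TODO above can be filled with a small reflection-based mapper. A sketch (the ReaderMapper name is hypothetical) that assumes the result set's column names match the model's property names:

```csharp
using System;
using System.Collections.Generic;
using System.Data;
using System.Reflection;

static class ReaderMapper
{
    // Maps every row of the reader to a T by matching column names to property names.
    public static List<T> MapAll<T>(IDataReader reader) where T : new()
    {
        var list = new List<T>();
        PropertyInfo[] props = typeof(T).GetProperties();
        while (reader.Read())
        {
            var item = new T();
            for (int i = 0; i < reader.FieldCount; i++)
            {
                if (reader.IsDBNull(i))
                    continue;
                string column = reader.GetName(i);
                PropertyInfo prop = Array.Find(props,
                    p => p.Name.Equals(column, StringComparison.OrdinalIgnoreCase));
                if (prop != null && prop.CanWrite)
                    prop.SetValue(item, reader.GetValue(i));
            }
            list.Add(item);
        }
        return list;
    }
}
```

Inside QueryInto the loop would collapse to `returnType = ReaderMapper.MapAll<T>(reader);`, though as the answer below notes, Dapper already does this with compiled, cached mappers rather than per-row reflection.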
You can use Dapper for this.
Example usage:
public class Dog
{
    public int? Age { get; set; }
    public Guid Id { get; set; }
    public string Name { get; set; }
    public float? Weight { get; set; }
}
var guid = Guid.NewGuid();
var dog = connection.Query<Dog>("select Age = @Age, Id = @Id", new { Age = (int?)null, Id = guid });
My system has this pattern: DAO -> Objects -> Facade -> View.
So I have a DAO to query the database and instantiate objects; these objects have only attributes (just a container/entity). I want to use LINQ in the DAO part, but I don't see how to pass my objects, because LINQ generates one class per table.
namespace ykpObjects.Objects
{
    public class Customer
    {
        public int cidadeID { get; set; }
        public string name { get; set; }

        public Customer()
        {
            cidadeID = 0;
        }
    }
}
namespace ykpData.Components.MSSQL
{
    public class CustomerDC : DataComponentCM, ICustomerDC
    {
        Customer ICustomerDC.RecuperaPorID(int CustomerID)
        {
            Customer Customer = new Customer();
            using (MDDataContext omd = new MDDataContext(base.PreencherConexao()))
            {
                sp_mkp_Customer_SelectByIDResult result = omd.sp_mkp_Customer_SelectByID(CustomerID).SingleOrDefault();
                if (result == null)
                    return null;

                Customer.name = result.name;
                return Customer;
            }
        }
    }
}
I use the DAO to call sprocs, take the sproc results, instantiate an object (a Customer, for example), and pass it to the control layer. Now I want to change to LINQ, but I don't want to change the whole object structure, to minimize the impact.
Any advice?
I'm not exactly sure what you are talking about with your current setup, but I think you're asking how to re-use the objects you currently have with Linq to SQL, rather than generating new ones from a dbml file. Am I right?
If so, you have a few options. You can use attributes to decorate your existing objects so that you can populate them with L2S, or you can create mapping files.
Some info here: http://www.sidarok.com/web/blog/content/2008/10/14/achieving-poco-s-in-linq-to-sql.html
I use Linq to SQL with attributes to achieve a "code first" solution, here's an example class:
[Table(Name = "Countries")]
public class Country
{
    [Column(IsPrimaryKey = true, IsDbGenerated = true, AutoSync = AutoSync.OnInsert)]
    public int CountryId { get; set; }

    [Column]
    public string iso2 { get; set; }

    [Column]
    public string iso3 { get; set; }

    [Column]
    public string name_en { get; set; }
}
To work with this object:
var context = new DataContext(ConnectionString);
var data = context.GetTable<Country>().Where(c => c.CountryId == 1);