I'm comparing materialization time between Dapper and plain ADO.NET. Overall Dapper tends to be faster than my ADO.NET code, except that the first time a given fetch query is executed it is slower. A few results show Dapper a little faster than ADO.NET (almost all of the results show them as comparable, though).
So I think I'm using an inefficient approach to map the results of SqlDataReader to objects.
This is my code:
var sql = "SELECT * FROM Sales.SalesOrderHeader WHERE SalesOrderID = #Id";
var conn = new SqlConnection(ConnectionString);
var stopWatch = new Stopwatch();
try
{
conn.Open();
var sqlCmd = new SqlCommand(sql, conn);
for (var i = 0; i < keys.GetLength(0); i++)
{
for (var r = 0; r < keys.GetLength(1); r++)
{
stopWatch.Restart();
sqlCmd.Parameters.Clear();
sqlCmd.Parameters.AddWithValue("@Id", keys[i, r]);
var reader = await sqlCmd.ExecuteReaderAsync();
SalesOrderHeaderSQLserver salesOrderHeader = null;
while (await reader.ReadAsync())
{
salesOrderHeader = new SalesOrderHeaderSQLserver();
salesOrderHeader.SalesOrderId = (int)reader["SalesOrderId"];
salesOrderHeader.SalesOrderNumber = reader["SalesOrderNumber"] as string;
salesOrderHeader.AccountNumber = reader["AccountNumber"] as string;
salesOrderHeader.BillToAddressID = (int)reader["BillToAddressID"];
salesOrderHeader.TotalDue = (decimal)reader["TotalDue"];
salesOrderHeader.Comment = reader["Comment"] as string;
salesOrderHeader.DueDate = (DateTime)reader["DueDate"];
salesOrderHeader.CurrencyRateID = reader["CurrencyRateID"] as int?;
salesOrderHeader.CustomerID = (int)reader["CustomerID"];
salesOrderHeader.SalesPersonID = reader["SalesPersonID"] as int?;
salesOrderHeader.CreditCardApprovalCode = reader["CreditCardApprovalCode"] as string;
salesOrderHeader.ShipDate = reader["ShipDate"] as DateTime?;
salesOrderHeader.Freight = (decimal)reader["Freight"];
salesOrderHeader.ModifiedDate = (DateTime)reader["ModifiedDate"];
salesOrderHeader.OrderDate = (DateTime)reader["OrderDate"];
salesOrderHeader.TerritoryID = reader["TerritoryID"] as int?;
salesOrderHeader.CreditCardID = reader["CreditCardID"] as int?;
salesOrderHeader.OnlineOrderFlag = (bool)reader["OnlineOrderFlag"];
salesOrderHeader.PurchaseOrderNumber = reader["PurchaseOrderNumber"] as string;
salesOrderHeader.RevisionNumber = (byte)reader["RevisionNumber"];
salesOrderHeader.Rowguid = (Guid)reader["Rowguid"];
salesOrderHeader.ShipMethodID = (int)reader["ShipMethodID"];
salesOrderHeader.ShipToAddressID = (int)reader["ShipToAddressID"];
salesOrderHeader.Status = (byte)reader["Status"];
salesOrderHeader.SubTotal = (decimal)reader["SubTotal"];
salesOrderHeader.TaxAmt = (decimal)reader["TaxAmt"];
}
stopWatch.Stop();
reader.Close();
await PrintTestFindByPKReport(stopWatch.ElapsedMilliseconds, salesOrderHeader.SalesOrderId.ToString());
}
}
}
finally
{
conn.Close();
}
I used the as keyword to cast the nullable columns; is that correct?
And this is the code for Dapper:
using (var conn = new SqlConnection(ConnectionString))
{
conn.Open();
var stopWatch = new Stopwatch();
for (var i = 0; i < keys.GetLength(0); i++)
{
for (var r = 0; r < keys.GetLength(1); r++)
{
stopWatch.Restart();
var result = (await conn.QueryAsync<SalesOrderHeader>("SELECT * FROM Sales.SalesOrderHeader WHERE SalesOrderID = @Id", new { Id = keys[i, r] })).FirstOrDefault();
stopWatch.Stop();
await PrintTestFindByPKReport(stopWatch.ElapsedMilliseconds, result.ToString());
}
}
}
When in doubt regarding anything db or reflection, I ask myself, "what would Marc Gravell do?".
In this case, he would use FastMember! And you should too. It's the underpinning of the data conversions in Dapper, and it can easily be used to map your own DataReader to an object (should you not want to use Dapper).
Below is an extension method converting a SqlDataReader into something of type T:
PLEASE NOTE: This code implies a dependency on FastMember and is written for .NET Core (though it could easily be converted to .NET Framework/Standard-compliant code).
public static T ConvertToObject<T>(this SqlDataReader rd) where T : class, new()
{
Type type = typeof(T);
var accessor = TypeAccessor.Create(type);
var members = accessor.GetMembers();
var t = new T();
for (int i = 0; i < rd.FieldCount; i++)
{
if (!rd.IsDBNull(i))
{
string fieldName = rd.GetName(i);
if (members.Any(m => string.Equals(m.Name, fieldName, StringComparison.OrdinalIgnoreCase)))
{
accessor[t, fieldName] = rd.GetValue(i);
}
}
}
return t;
}
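For completeness, here is a hedged sketch of how the extension above might be used to read a full result set; the query and the SalesOrderHeaderSQLserver type are taken from the question, and the parameter value is a placeholder.

var results = new List<SalesOrderHeaderSQLserver>();
using (var conn = new SqlConnection(ConnectionString))
using (var cmd = new SqlCommand("SELECT * FROM Sales.SalesOrderHeader WHERE SalesOrderID = @Id", conn))
{
    cmd.Parameters.AddWithValue("@Id", 43659); // hypothetical key value
    await conn.OpenAsync();
    using (var reader = await cmd.ExecuteReaderAsync())
    {
        while (await reader.ReadAsync())
        {
            // ConvertToObject<T> is the FastMember-based extension defined above
            results.Add(reader.ConvertToObject<SalesOrderHeaderSQLserver>());
        }
    }
}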
2022 update
Now that we have .NET 5 and .NET 6 available, we also have Source Generators, an amazing Roslyn-based feature that lets your code generate more code at compile time. It's basically "AOT reflection" (ahead-of-time) that allows you to generate lightning-fast mapping code with zero overhead. This will revolutionize the ORM world for sure.
Now, back to the question - the fastest way to map an IDataReader would be to use Source Generators. We started experimenting with this feature and we love it.
Here's a library we're working on, that does exactly that (maps DataReader to objects), feel free to "steal" some code examples: https://github.com/jitbit/MapDataReader
Previous answer that is still 100% valid
The most upvoted answer mentions @MarcGravell and his FastMember. But if you're already using Dapper, which is also his library, you can use Dapper's GetRowParser like this:
var parser = reader.GetRowParser<MyObject>(typeof(MyObject));
while (reader.Read())
{
var myObject = parser(reader);
}
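In context, a hedged end-to-end sketch might look like this (connection string, query, and MyObject are placeholders, not part of the original answer):

using (var conn = new SqlConnection(connectionString))
using (var cmd = new SqlCommand("SELECT * FROM Sales.SalesOrderHeader WHERE SalesOrderID = @Id", conn))
{
    cmd.Parameters.AddWithValue("@Id", id);
    conn.Open();
    using (var reader = cmd.ExecuteReader())
    {
        // The row parser delegate is compiled once and reused for every row.
        var parser = reader.GetRowParser<MyObject>();
        var rows = new List<MyObject>();
        while (reader.Read())
        {
            rows.Add(parser(reader));
        }
    }
}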
Here's a way to make your ADO.NET code faster.
When you do your select, list out the fields you are selecting rather than using SELECT *. This lets you ensure the order the fields come back in, even if that order changes in the database. Then, when getting those fields from the reader, get them by index rather than by name; using an index is faster.
Also, I'd recommend not making string database fields nullable unless there is a strong business reason; just store a blank string in the database if there is no value. Finally, I'd recommend using the Get methods on the DataReader to retrieve your fields as their actual types, so that no casting is needed in your code. For example, instead of casting the DataReader[index++] value to an int, use DataReader.GetInt32(index++).
So for example, this code:
salesOrderHeader = new SalesOrderHeaderSQLserver();
salesOrderHeader.SalesOrderId = (int)reader["SalesOrderId"];
salesOrderHeader.SalesOrderNumber = reader["SalesOrderNumber"] as string;
salesOrderHeader.AccountNumber = reader["AccountNumber"] as string;
becomes
int index = 0;
salesOrderHeader = new SalesOrderHeaderSQLserver();
salesOrderHeader.SalesOrderId = reader.GetInt32(index++);
salesOrderHeader.SalesOrderNumber = reader.GetString(index++);
salesOrderHeader.AccountNumber = reader.GetString(index++);
Give that a whirl and see how it does for you.
Modified @HouseCat's solution to be case insensitive:
/// <summary>
/// Maps a SqlDataReader record to an object. Ignoring case.
/// </summary>
/// <typeparam name="T"></typeparam>
/// <param name="dataReader"></param>
/// <param name="newObject"></param>
/// <remarks>https://stackoverflow.com/a/52918088</remarks>
public static void MapDataToObject<T>(this SqlDataReader dataReader, T newObject)
{
if (newObject == null) throw new ArgumentNullException(nameof(newObject));
// Fast Member Usage
var objectMemberAccessor = TypeAccessor.Create(newObject.GetType());
var propertiesHashSet =
objectMemberAccessor
.GetMembers()
.Select(mp => mp.Name)
.ToHashSet(StringComparer.InvariantCultureIgnoreCase);
for (int i = 0; i < dataReader.FieldCount; i++)
{
var name = propertiesHashSet.FirstOrDefault(a => a.Equals(dataReader.GetName(i), StringComparison.InvariantCultureIgnoreCase));
if (!String.IsNullOrEmpty(name))
{
objectMemberAccessor[newObject, name]
= dataReader.IsDBNull(i) ? null : dataReader.GetValue(i);
}
}
}
EDIT: This does not work for List<T> or multiple tables in the results.
EDIT2: Changing the calling function to the one below works for lists. I am just going to return a list of objects no matter what, and take the first element when I expect a single object. I haven't looked into multiple tables yet, but I will.
public static void MapDataToObject<T>(this SqlDataReader dataReader, T newObject)
{
if (newObject == null) throw new ArgumentNullException(nameof(newObject));
// Fast Member Usage
var objectMemberAccessor = TypeAccessor.Create(newObject.GetType());
var propertiesHashSet =
objectMemberAccessor
.GetMembers()
.Select(mp => mp.Name)
.ToHashSet(StringComparer.InvariantCultureIgnoreCase);
for (int i = 0; i < dataReader.FieldCount; i++)
{
var name = propertiesHashSet.FirstOrDefault(a => a.Equals(dataReader.GetName(i), StringComparison.InvariantCultureIgnoreCase));
if (!String.IsNullOrEmpty(name))
{
//Attention! if you are getting errors here, then double check that your model and sql have matching types for the field name.
//Check api.log for error message!
objectMemberAccessor[newObject, name]
= dataReader.IsDBNull(i) ? null : dataReader.GetValue(i);
}
}
}
EDIT 3: Updated to show sample calling function.
public async Task<List<T>> ExecuteReaderAsync<T>(string storedProcedureName, SqlParameter[] sqlParameters = null) where T : class, new()
{
var newListObject = new List<T>();
using (var conn = new SqlConnection(_connectionString))
{
using (SqlCommand sqlCommand = GetSqlCommand(conn, storedProcedureName, sqlParameters))
{
await conn.OpenAsync();
using (var dataReader = await sqlCommand.ExecuteReaderAsync(CommandBehavior.Default))
{
if (dataReader.HasRows)
{
while (await dataReader.ReadAsync())
{
var newObject = new T();
dataReader.MapDataToObject(newObject);
newListObject.Add(newObject);
}
}
}
}
}
return newListObject;
}
Took the method from pimbrouwers' answer and optimized it slightly by reducing the LINQ calls.
It maps only properties found in both the object and the data field names, and it handles DBNull. The other assumption made is that your domain model property names exactly match the table column/field names.
/// <summary>
/// Maps a SqlDataReader record to an object.
/// </summary>
/// <typeparam name="T"></typeparam>
/// <param name="dataReader"></param>
/// <param name="newObject"></param>
public static void MapDataToObject<T>(this SqlDataReader dataReader, T newObject)
{
if (newObject == null) throw new ArgumentNullException(nameof(newObject));
// Fast Member Usage
var objectMemberAccessor = TypeAccessor.Create(newObject.GetType());
var propertiesHashSet =
objectMemberAccessor
.GetMembers()
.Select(mp => mp.Name)
.ToHashSet();
for (int i = 0; i < dataReader.FieldCount; i++)
{
if (propertiesHashSet.Contains(dataReader.GetName(i)))
{
objectMemberAccessor[newObject, dataReader.GetName(i)]
= dataReader.IsDBNull(i) ? null : dataReader.GetValue(i);
}
}
}
Sample Usage:
public async Task<T> GetAsync<T>(string storedProcedureName, SqlParameter[] sqlParameters = null) where T : class, new()
{
using (var conn = new SqlConnection(_connString))
{
var sqlCommand = await GetSqlCommandAsync(storedProcedureName, conn, sqlParameters);
var dataReader = await sqlCommand.ExecuteReaderAsync(CommandBehavior.CloseConnection);
if (dataReader.HasRows)
{
var newObject = new T();
if (await dataReader.ReadAsync())
{ dataReader.MapDataToObject(newObject); }
return newObject;
}
else
{ return null; }
}
}
You can install the package DbDataReaderMapper with the command Install-Package DbDataReaderMapper or using your IDE's package manager.
You can then create your data access object (I will choose a shorter example than the one you provided):
class EmployeeDao
{
public int Id { get; set; }
public string FirstName { get; set; }
public string LastName { get; set; }
public int? Age { get; set; }
}
To do the automatic mapping you can call the extension method MapToObject<T>()
var reader = await sqlCmd.ExecuteReaderAsync();
while (await reader.ReadAsync())
{
var employeeObj = reader.MapToObject<EmployeeDao>();
}
and you will get rid of tens of lines of unreadable and hard-to-maintain code.
Step-by-step example here: https://github.com/LucaMozzo/DbDataReaderMapper
Perhaps the approach I will present isn't the most efficient, but it gets the job done with very little coding effort. The main benefit I see here is that you don't have to deal with the data structure other than building a compatible (mappable) object.
If you convert the SqlDataReader to a DataTable and then serialize it using JsonConvert.SerializeObject, you can deserialize it to a known object type using JsonConvert.DeserializeObject.
Here is an example of implementation:
SqlDataReader reader = null;
SqlConnection myConnection = new SqlConnection();
myConnection.ConnectionString = ConfigurationManager.ConnectionStrings["DatabaseConnection"].ConnectionString;
SqlCommand sqlCmd = new SqlCommand();
sqlCmd.CommandType = CommandType.Text;
sqlCmd.CommandText = "SELECT * FROM MyTable";
sqlCmd.Connection = myConnection;
myConnection.Open();
reader = sqlCmd.ExecuteReader();
var dataTable = new DataTable();
dataTable.Load(reader);
List<MyObject> myObjects = new List<MyObject>();
if (dataTable.Rows.Count > 0)
{
var serializedMyObjects = JsonConvert.SerializeObject(dataTable);
// Here you get the object
myObjects = (List<MyObject>)JsonConvert.DeserializeObject(serializedMyObjects, typeof(List<MyObject>));
}
myConnection.Close();
List<T> result = new List<T>();
SqlDataReader reader = com.ExecuteReader();
while(reader.Read())
{
Type type = typeof(T);
T obj = (T)Activator.CreateInstance(type);
PropertyInfo[] properties = type.GetProperties();
foreach (PropertyInfo property in properties)
{
try
{
var value = reader[property.Name];
if (value != null)
property.SetValue(obj, Convert.ChangeType(value.ToString(), property.PropertyType));
}
catch{}
}
result.Add(obj);
}
There is a SqlDataReader Mapper library on NuGet which helps you map a SqlDataReader to an object. Here is how it can be used (from the GitHub documentation):
var mappedObject = new SqlDataReaderMapper<DTOObject>(reader)
.Build();
Or, if you want a more advanced mapping:
var mappedObject = new SqlDataReaderMapper<DTOObject>(reader)
.NameTransformers("_", "")
.ForMember<int>("CurrencyId")
.ForMember("CurrencyCode", "Code")
.ForMember<string>("CreatedByUser", "User").Trim()
.ForMemberManual("CountryCode", val => val.ToString().Substring(0, 10))
.ForMemberManual("ZipCode", val => val.ToString().Substring(0, 5), "ZIP")
.Build();
Advanced mapping allows you to use name transformers, change types, map fields manually, or even apply functions to the object's data, so that you can easily map objects even if they differ from what the reader returns.
I took both pimbrouwers' and HouseCat's answers and came up with this. In my scenario, the column names in the database are in snake_case.
public static T ConvertToObject<T>(string query) where T : class, new()
{
using (var conn = new SqlConnection(AutoConfig.ConnectionString))
{
conn.Open();
var cmd = new SqlCommand(query) {Connection = conn};
var rd = cmd.ExecuteReader();
var mappedObject = new T();
if (!rd.HasRows) return mappedObject;
var accessor = TypeAccessor.Create(typeof(T));
var members = accessor.GetMembers();
if (!rd.Read()) return mappedObject;
for (var i = 0; i < rd.FieldCount; i++)
{
var columnNameFromDataTable = rd.GetName(i);
var columnValueFromDataTable = rd.GetValue(i);
var splits = columnNameFromDataTable.Split('_');
var columnName = new StringBuilder("");
foreach (var split in splits)
{
columnName.Append(CultureInfo.InvariantCulture.TextInfo.ToTitleCase(split.ToLower()));
}
var mappedColumnName = members.FirstOrDefault(x =>
string.Equals(x.Name, columnName.ToString(), StringComparison.OrdinalIgnoreCase));
if(mappedColumnName == null) continue;
var columnType = mappedColumnName.Type;
if (columnValueFromDataTable != DBNull.Value)
{
accessor[mappedObject, columnName.ToString()] = Convert.ChangeType(columnValueFromDataTable, columnType);
}
}
return mappedObject;
}
}
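A hypothetical call, assuming a snake_case table such as sales_order_header whose columns map onto the Pascal-case properties of the target type:

// Returns the first row mapped to the POCO, or a default instance if there are no rows.
var header = ConvertToObject<SalesOrderHeaderSQLserver>("SELECT TOP 1 * FROM sales_order_header");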
We use the following class to execute a SQL query and automatically map the rows to objects. You can easily adjust the class to fit to your needs. Beware that our approach depends on FastMember, but you could easily modify the code to use reflection.
/// <summary>
/// Mapping configuration for a specific sql table to a specific class.
/// </summary>
/// <param name="Accessor">Used to access the target class properties.</param>
/// <param name="PropToRowIdxDict">Target class property name -> database reader row idx dictionary.</param>
internal record RowMapper(TypeAccessor Accessor, IDictionary<string, int> PropToRowIdxDict);
public class RawSqlHelperService
{
/// <summary>
/// Create a new mapper for the conversion of a <see cref="DbDataReader"/> row -> <typeparamref name="T"/>.
/// </summary>
/// <typeparam name="T">Target class to use.</typeparam>
/// <param name="reader">Data reader to obtain column information from.</param>
/// <returns>Row mapper object for <see cref="DbDataReader"/> row -> <typeparamref name="T"/>.</returns>
private RowMapper GetRowMapper<T>(DbDataReader reader) where T : class, new()
{
var accessor = TypeAccessor.Create(typeof(T));
var members = accessor.GetMembers();
// Column name -> column idx dict
var columnIdxDict = Enumerable.Range(0, reader.FieldCount).ToDictionary(idx => reader.GetName(idx), idx => idx);
var propToRowIdxDict = members
.Where(m => m.GetAttribute(typeof(NotMappedAttribute), false) == null)
.Select(m =>
{
var columnAttr = m.GetAttribute(typeof(ColumnAttribute), false) as ColumnAttribute;
var columnName = columnAttr == null
? m.Name
: columnAttr.Name;
return (PropertyName: m.Name, ColumnName: columnName);
})
.ToDictionary(x => x.PropertyName, x => columnIdxDict[x.ColumnName]);
return new RowMapper(accessor, propToRowIdxDict);
}
/// <summary>
/// Read <see cref="DbDataReader"/> current row as object <typeparamref name="T"/>.
/// </summary>
/// <typeparam name="T">The class to map to.</typeparam>
/// <param name="reader">Data reader to read the current row from.</param>
/// <param name="mapper">Mapping configuration to use to perform the mapping operation.</param>
/// <returns>Resulting object of the mapping operation.</returns>
private T ReadRowAsObject<T>(DbDataReader reader, RowMapper mapper) where T : class, new()
{
var (accessor, propToRowIdxDict) = mapper;
var t = new T();
foreach (var (propertyName, columnIdx) in propToRowIdxDict)
accessor[t, propertyName] = reader.GetValue(columnIdx);
return t;
}
/// <summary>
/// Execute the specified <paramref name="sql"/> query and automatically map the resulting rows to <typeparamref name="T"/>.
/// </summary>
/// <typeparam name="T">Target class to map to.</typeparam>
/// <param name="dbContext">Database context to perform the operation on.</param>
/// <param name="sql">SQL query to execute.</param>
/// <param name="parameters">Additional list of parameters to use for the query.</param>
/// <returns>Result of the SQL query mapped to a list of <typeparamref name="T"/>.</returns>
public async Task<IEnumerable<T>> ExecuteSql<T>(DbContext dbContext, string sql, IEnumerable<DbParameter> parameters = null) where T : class, new()
{
var con = dbContext.Database.GetDbConnection();
await con.OpenAsync();
var cmd = con.CreateCommand() as OracleCommand;
cmd.BindByName = true;
cmd.CommandText = sql;
cmd.Parameters.AddRange(parameters?.ToArray() ?? new DbParameter[0]);
var reader = await cmd.ExecuteReaderAsync();
var records = new List<T>();
var mapper = GetRowMapper<T>(reader);
while (await reader.ReadAsync())
{
records.Add(ReadRowAsObject<T>(reader, mapper));
}
await con.CloseAsync();
return records;
}
}
Mapping Attributes Supported
I implemented support for the attributes NotMapped and Column used also by the entity framework.
NotMapped Attribute
Properties decorated with this attribute will be ignored by the mapper.
Column Attribute
With this attribute the column name can be customized. Without this attribute the property name is assumed to be the column name.
Example Class
private class Test
{
[Column("SDAT")]
public DateTime StartDate { get; set; } // Column name = "SDAT"
public DateTime EDAT { get; set; } // Column name = "EDAT"
[NotMapped]
public int IWillBeIgnored { get; set; }
}
Comparison to Reflection
I also compared the approach with FastMember to using plain reflection.
For the comparison I queried two date columns from a table with 1,000,000 rows; here are the results:
Approach      Duration in seconds
FastMember    ~1.6 seconds
Reflection    ~2 seconds
Credits to user pim for inspiration.
This is based on the other answers, but I used standard reflection to read the properties of the class you want to instantiate and fill it from the DataReader. You could also store the properties in a dictionary persisted between reads.
Initialize a dictionary containing the properties from the type with their names as the keys.
var type = typeof(Foo);
var properties = type.GetProperties(BindingFlags.Public | BindingFlags.Instance);
var propertyDictionary = new Dictionary<string,PropertyInfo>();
foreach(var property in properties)
{
if (!property.CanWrite) continue;
propertyDictionary.Add(property.Name, property);
}
The method to set up a new instance of the type from the DataReader would look like this:
var foo = new Foo();
//retrieve the propertyDictionary for the type
for (var i = 0; i < dataReader.FieldCount; i++)
{
var n = dataReader.GetName(i);
PropertyInfo prop;
if (!propertyDictionary.TryGetValue(n, out prop)) continue;
var val = dataReader.IsDBNull(i) ? null : dataReader.GetValue(i);
prop.SetValue(foo, val, null);
}
return foo;
If you want to write an efficient generic class dealing with multiple types, you could store each dictionary in a global Dictionary<Type, Dictionary<string, PropertyInfo>>.
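For example, a minimal sketch of such a cache, assuming plain reflection and lazy, thread-safe population (the ConcurrentDictionary and helper below are my own illustration; they require System.Collections.Concurrent, System.Linq and System.Reflection):

// Hypothetical per-type cache of writable properties, keyed by the target type.
private static readonly ConcurrentDictionary<Type, Dictionary<string, PropertyInfo>> PropertyCache =
    new ConcurrentDictionary<Type, Dictionary<string, PropertyInfo>>();

private static Dictionary<string, PropertyInfo> GetPropertyDictionary(Type type)
{
    return PropertyCache.GetOrAdd(type, t =>
        t.GetProperties(BindingFlags.Public | BindingFlags.Instance)
         .Where(p => p.CanWrite)
         .ToDictionary(p => p.Name));
}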
This kinda works
public static object PopulateClass(object o, SQLiteDataReader dr, Type T)
{
Type type = o.GetType();
PropertyInfo[] properties = type.GetProperties();
foreach (PropertyInfo property in properties)
{
T.GetProperty(property.Name).SetValue(o, dr[property.Name],null);
}
return o;
}
Note I'm using SQLite here, but the concept is the same. As an example, I'm filling a Game object by calling the above like this:
g = PopulateClass(g, dr, typeof(Game)) as Game;
Note that your class has to match up with the DataReader 100%, so adjust your query to suit, or pass in some sort of list of fields to skip. With a SqlDataReader talking to a SQL Server DB you have a pretty good type match between .NET and the database. With SQLite you have to declare your ints in your class as Int64 for this to work, and watch out for sending nulls to strings. But the above concept seems to work, so it should get you going. I think this is what the OP was after.
Database
I have a stored procedure named GetAllCustomer.
I have the stored procedure mapped in Entity Framework 6.x, in an ASP.NET MVC 5 project, and I use it by calling db.GetAllCustomer.ToList();.
But in Entity Framework Core it does not generate the DbSet/method for it, and I need to create it manually; I can't use EF 6.x.
Here is an image of my work:
Is there a way to call a stored procedure as simply as in EF 6.x?
EF Core Power Tools can map stored procedures for you; it is not a built-in feature of EF Core.
Sample user code:
using (var db = new NorthwindContext())
{
var procedures = new NorthwindContextProcedures(db);
var orders = await procedures.CustOrderHist("ALFKI");
foreach (var order in orders)
Console.WriteLine($"{order.ProductName}: {order.Total}");
var outOverallCount = new OutputParameter<int?>();
var customers = await procedures.SP_GET_TOP_IDS(10, outOverallCount);
Console.WriteLine($"Db contains {outOverallCount.Value} Customers.");
foreach (var customer in customers)
Console.WriteLine(customer.CustomerId);
}
Read more here: https://github.com/ErikEJ/EFCorePowerTools/wiki/Reverse-Engineering#sql-server-stored-procedures
You can add a custom ExecuteQuery method to your context and use it for anything, including stored procedures.
public List<T> ExecuteQuery<T>(string query) where T : class, new()
{
using (var command = Context.Database.GetDbConnection().CreateCommand())
{
command.CommandText = query;
command.CommandType = CommandType.Text;
Context.Database.OpenConnection();
List<T> result;
using (var reader = command.ExecuteReader())
{
result = new List<T>();
var columns = new T().GetType().GetProperties().ToList();
while (reader.Read())
{
var obj = new T();
for (var i = 0; i < reader.FieldCount; i++)
{
var name = reader.GetName(i);
var prop = columns.FirstOrDefault(a => a.Name.ToLower().Equals(name.ToLower()));
if (prop == null)
continue;
var val = reader.IsDBNull(i) ? null : reader[i];
prop.SetValue(obj, val, null);
}
result.Add(obj);
}
return result;
}
}
}
Usage:
db.ExecuteQuery<YOUR_MODEL_DEPEND_ON_RETURN_RESULT>("SELECT FIELDS FROM YOUR_TABLE_NAME")
db.ExecuteQuery<YOUR_MODEL_DEPEND_ON_RETURN_RESULT>("EXEC YOUR_SP_NAME")
db.ExecuteQuery<YOUR_MODEL_DEPEND_ON_RETURN_RESULT>("EXEC YOUR_SP_NAME @Id = 10")
To reduce errors, build query strings more easily, and work faster, I use several other helper methods, and I keep the stored procedure names in a static class.
For example, I have something like this to get customer list:
/// <param name="parameters">The model contains all SP parameters</param>
public List<customerGetDto> Get(CustomerSpGetParameters parameters = null)
{
//StoredProcedures.Customer.Get Is "sp_GetAllCustomers"
//CreateSqlQueryForSp creates a string with stored procedure name and parameters
var query = _publicMethods.CreateSqlQueryForSp(StoredProcedures.Customer.Get, parameters);
//For example, query= "Exec sp_GetAllCustomers #active = 1,#level = 3,...."
return _unitOfWork.ExecuteQuery<customerGetDto>(query);
}
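CreateSqlQueryForSp is not shown in the answer; below is a minimal sketch of what such a helper might look like, based only on the comments above (the implementation is my assumption, not the author's code). Note it concatenates values into the SQL text, so with untrusted input you would want DbParameters instead.

public string CreateSqlQueryForSp(string storedProcedureName, object parameters = null)
{
    // Produces e.g. "Exec sp_GetAllCustomers @active = 1, @level = 3"
    if (parameters == null)
        return "Exec " + storedProcedureName;

    var parts = parameters.GetType().GetProperties()
        .Where(p => p.GetValue(parameters) != null)
        .Select(p =>
        {
            var value = p.GetValue(parameters);
            // Quote strings and dates, render booleans as 1/0, leave numbers as-is.
            string literal;
            if (value is string || value is DateTime)
                literal = "'" + value.ToString().Replace("'", "''") + "'";
            else if (value is bool)
                literal = (bool)value ? "1" : "0";
            else
                literal = value.ToString();
            return "@" + p.Name + " = " + literal;
        });
    return "Exec " + storedProcedureName + " " + string.Join(", ", parts);
}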
When using the C# code below to construct a DB2 SQL query, the result set only has one row. If I manually construct the IN predicate inside the cmdTxt string using string.Join(",", ids), then all of the expected rows are returned. How can I return all of the expected rows using the db2Parameter object instead of building the query as one long string to send to the server?
public object[] GetResults(int[] ids)
{
var cmdTxt = "SELECT DISTINCT ID,COL2,COL3 FROM TABLE WHERE ID IN ( #ids ) ";
var db2Command = _DB2Connection.CreateCommand();
db2Command.CommandText = cmdTxt;
var db2Parameter = db2Command.CreateParameter();
db2Parameter.ArrayLength = ids.Length;
db2Parameter.DB2Type = DB2Type.DynArray;
db2Parameter.ParameterName = "@ids";
db2Parameter.Value = ids;
db2Command.Parameters.Add(db2Parameter);
var results = ExecuteQuery(db2Command);
return results.ToArray();
}
private object[] ExecuteQuery(DB2Command db2Command)
{
_DB2Connection.Open();
var resultList = new ArrayList();
var results = db2Command.ExecuteReader();
while (results.Read())
{
var values = new object[results.FieldCount];
results.GetValues(values);
resultList.Add(values);
}
results.Close();
_DB2Connection.Close();
return resultList.ToArray();
}
You cannot send in an array as a parameter. You would have to do something to build out a list of parameters, one for each of your values.
e.g.: SELECT DISTINCT ID,COL2,COL3 FROM TABLE WHERE ID IN ( @id1, @id2, ... @idN )
And then add the values to your parameter collection:
cmd.Parameters.Add("#id1", DB2Type.Integer).Value = your_val;
Additionally, there are a few things I would do to improve your code:
Use using statements around your DB2 objects. This will automatically dispose of the objects correctly when they go out of scope. If you don't do this, eventually you will run into errors. This should be done on DB2Connection, DB2Command, DB2Transaction, and DB2Reader objects especially.
I would recommend that you wrap queries in a transaction object, even for selects. With DB2 (and my experience is with z/OS mainframe, here... it might be different for AS/400), it writes one "accounting" record (basically the work that DB2 did) for each transaction. If you don't have an explicit transaction, DB2 will create one for you, and automatically commit after every statement, which adds up to a lot of backend records that could be combined.
My personal opinion would also be to create a .NET class to hold the data that you are getting back from the database. That would make it easier to work with using IntelliSense, among other things (because you would be able to auto-complete the property name, and .NET would know the type of the object). Right now, with the array of objects, if your column order or data type changes, it may be difficult to find/debug those usages throughout your code.
I've included a version of your code that I re-wrote that has some of these changes in it:
public List<ReturnClass> GetResults(int[] ids)
{
using (var conn = new DB2Connection())
{
conn.Open();
using (var trans = conn.BeginTransaction(IsolationLevel.ReadCommitted))
using (var cmd = conn.CreateCommand())
{
cmd.Transaction = trans;
var parms = new List<string>();
var idCount = 0;
foreach (var id in ids)
{
var parm = "#id" + idCount++;
parms.Add(parm);
cmd.Parameters.Add(parm, DB2Type.Integer).Value = id;
}
cmd.CommandText = "SELECT DISTINCT ID,COL2,COL3 FROM TABLE WHERE ID IN ( " + string.Join(",", parms) + " ) ";
var resultList = new List<ReturnClass>();
using (var reader = cmd.ExecuteReader())
{
while (reader.Read())
{
var values = new ReturnClass();
values.Id = (int)reader["ID"];
values.Col1 = reader["COL1"].ToString();
values.Col2 = reader["COL2"].ToString();
resultList.Add(values);
}
}
return resultList;
}
}
}
public class ReturnClass
{
public int Id;
public string Col2;
public string Col3;
}
Try changing from:
db2Parameter.DB2Type = DB2Type.DynArray;
to:
db2Parameter.DB2Type = DB2Type.Integer;
This is based on the example given here
I am a C# developer new to Neo4j. I have spent a couple of weeks learning the graph concept and trying out Cypher. It is so far a good experience!
I now tried to work with some real data from code through the official C# driver. I was surprised to find that the driver is merely a wrapper of the APIs with no real .Net functionality on top.
When creating nodes, I managed fine by building Cypher statements with this pattern:
CREATE (:Movie {tmdbId: {tmdbId}, imdbId: {imdbId}, title: {title}, originalTitle: {originalTitle}, collectionInfo: {collectionInfo}, genres: {genres}, releaseDate: {releaseDate}, plot: {plot}, tagline: {tagline}, originalLanguage: {originalLanguage}, tmdbPopularity: {tmdbPopularity}, tmdbVoteAverage: {tmdbVoteAverage}, tmdbVoteCount: {tmdbVoteCount}, budget: {budget}} )
The parameter collection is automatically generated from the object, and it works fine. But when creating relationships I get an unexpected error.
This is the statement I use to create relations. Source and target nodes are looked up by Id.
MATCH (s:Person), (t:Movie) WHERE s.personId=35742 AND t.movieId=19404 CREATE (s)-[r:ACTED_IN {order: {order}, character: {character}}]->(t) RETURN r
The error I receive is:
System.Reflection.TargetParameterCountException: 'Parameter count mismatch.'
The parameter collection is created the same way as last time. It holds two properties named "order" and "character" as expected.
Is there some error in the statement that I am missing?
/// <summary>
/// Add object as node in Neo4j database.
/// All public properties will automatically be added as properties of the node.
/// </summary>
/// <param name="obj">Generic POCO object</param>
/// <param name="label">Specify type name to be uses. Skip if you are satisfied with object type name.</param>
public void AddNode(object obj, string label = null)
{
using (var session = _driver.Session())
{
label = label ?? obj.GetType().Name;
var parameters = GetProperties(obj);
var valuePairs = string.Join(", ", parameters.Select(p => $"{p.Key}: {{{p.Key}}}"));
var statement = $"CREATE (:{label} {{{valuePairs}}} )";
var result = session.Run(statement, parameters);
Debug.WriteLine($"{result.Summary.Counters.NodesCreated} {label} node created with {result.Summary.Counters.PropertiesSet} properties");
}
}
public void AddRelation(string sourceNodeName, string sourceIdName, string targetNodeName, string targetIdName, string relationName, object relation, string relationSourceIdName, string relationPropertyIdName)
{
using (var session = _driver.Session())
{
//MATCH(s:Person), (t:Person)
//WHERE s.name = 'Source Node' AND t.name = 'Target Node'
//CREATE(s) -[r:RELTYPE]->(t)
//RETURN r
var parameters = GetProperties(relation);
var sourceId = parameters[relationSourceIdName];
var targetId = parameters[relationPropertyIdName];
var properties = parameters.Where(p => p.Key != relationSourceIdName && p.Key != relationPropertyIdName).ToList();
var valuePairs = string.Join(", ", properties.Select(p => $"{p.Key}: {{{p.Key}}}"));
var statement = $"MATCH (s:{sourceNodeName}), (t:{targetNodeName}) WHERE s.{sourceIdName}={sourceId} AND t.{targetIdName}={targetId} CREATE (s)-[r:{relationName} {{{valuePairs}}}]->(t) RETURN r";
var result = session.Run(statement, properties);
Debug.WriteLine($"{result.Summary.Counters.RelationshipsCreated} {relationName} relations created with {result.Summary.Counters.PropertiesSet} properties");
}
}
private static Dictionary<string, object> GetProperties(object obj)
{
var dictionary = new Dictionary<string, object>();
foreach (var property in obj.GetType().GetProperties())
{
var propertyName = property.Name;
var value = property.GetValue(obj);
var array = value as string[];
if (array != null)
{
value = string.Join(",", array);
}
if (value is DateTime)
{
var dateTime = (DateTime)value;
value = dateTime.ToString("yyyy-MM-dd");
}
dictionary.Add(propertyName.ToCamelCase(), value);
}
return dictionary;
}
Bruno set me in the right direction! It was not a problem with either Cypher or the wrapper. My mistake was filtering the parameters collection and returning a list of properties instead of keeping it as a dictionary.
The adjusted working method now looks like this:
public void AddRelation(string fromNodeName, string fromIdName, string toNodeName, string toIdName, string relationName, object relation, string relationFromIdName, string relationToIdName)
{
using (var session = _driver.Session())
{
//MATCH(s:Person), (t:Person)
//WHERE s.name = 'Source Node' AND t.name = 'Target Node'
//CREATE(s) -[r:RELATIONTYPE]->(t)
//RETURN r
var parameters = GetProperties(relation);
var fromIdValue = parameters[relationFromIdName];
var toIdValue = parameters[relationToIdName];
var properties = parameters.Where(p => p.Key != relationFromIdName && p.Key != relationToIdName).ToDictionary(p => p.Key, p => p.Value);
var valuePairs = string.Join(", ", properties.Select(p => $"{p.Key}: {{{p.Key}}}"));
var statement = $"MATCH (s:{fromNodeName}), (t:{toNodeName}) WHERE s.{fromIdName}={fromIdValue} AND t.{toIdName}={toIdValue} CREATE (s)-[r:{relationName} {{{valuePairs}}}]->(t) RETURN r";
var result = session.Run(statement, properties);
Console.WriteLine($"{result.Summary.Counters.RelationshipsCreated} {relationName} relations created with {result.Summary.Counters.PropertiesSet} properties");
}
}
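A hypothetical call matching the ACTED_IN example from the question; the anonymous relation object and its Order/Character values are mine for illustration:

// personId/movieId are stripped out and used to match the nodes;
// order/character become properties of the relationship.
var actedIn = new { PersonId = 35742, MovieId = 19404, Order = 1, Character = "Some character" };
AddRelation("Person", "personId", "Movie", "movieId", "ACTED_IN", actedIn, "personId", "movieId");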
I need to store the data returned from this LINQ to Entities query (below) in a DataTable so that I can use it as the data source for a DataGridView. How can I do that?
In this case I'm using LINQ to Entities to query against an Entity Framework conceptual model, so db is a class that inherits from System.Data.Entity.DbContext.
using (TccContext db = new TccContext())
{
var query = from vendedor in db.Vendedores.AsEnumerable()
where vendedor.codigo == Convert.ToInt32(textBoxPesquisa.Text)
select vendedor;
// I'd like to do something like DataTable dt = query;
}
I've tried to do this (below), but it throws an exception during execution [1].
using (TccContext db = new TccContext())
{
IEnumerable<DataRow> query = (IEnumerable<DataRow>)(from vendedor in db.Vendedores.AsEnumerable()
where vendedor.codigo == Convert.ToInt32(textBoxPesquisa.Text)
select vendedor);
using (DataTable dt = query.CopyToDataTable<DataRow>())
{
this.dataGridViewProcura.Rows.Add(
dt.Rows[0][0], // Código
dt.Rows[0][1], // Nome
dt.Rows[0][2]); // Venda Mensal
}
}
[1]: Exception: InvalidCastException
Unable to cast object of type 'WhereEnumerableIterator`1[Projeto_TCC.Models.Vendedor]' to type 'System.Collections.Generic.IEnumerable`1[System.Data.DataRow]'.
Thanks in advance
There is one important thing here: you are casting your LINQ query to IEnumerable<DataRow>, but you are selecting vendedor. I assume vendedor is an instance of Vendedor, so your query will return an IEnumerable<Vendedor>.
That should solve your problem, but also, can you try using the generated DataTable as the DataSource for your DataGridView? It would be something like this:
var query = (from vendedor in db.Vendedores.AsEnumerable()
where vendedor.codigo == Convert.ToInt32(textBoxPesquisa.Text)
select vendedor);
var dt = query.CopyToDataTable<Vendedor>();
this.dataGridViewProcura.DataSource = dt;
Hope I can help!
EDIT
As a side (and very personal) note, you could try using lambdas on your select, they look prettier :)
var pesquisa = Convert.ToInt32(textBoxPesquisa.Text);
var query = db.Vendedores.Where(vendedor => vendedor.codigo == pesquisa);
var dt = query.CopyToDataTable<Vendedor>();
this.dataGridViewProcura.DataSource = dt;
A lot cleaner, don't you think?
EDIT 2
I've just realized what you said about CopyToDataTable being for DataRow only, so the last (admittedly not so clean) solution would be to mimic the logic in a helper:
public static DataTable CopyGenericToDataTable<T>(this IEnumerable<T> items)
{
var properties = typeof(T).GetProperties();
var result = new DataTable();
//Build the columns
foreach ( var prop in properties ) {
result.Columns.Add(prop.Name, prop.PropertyType);
}
//Fill the DataTable
foreach( var item in items ){
var row = result.NewRow();
foreach ( var prop in properties ) {
var itemValue = prop.GetValue(item, new object[] {});
row[prop.Name] = itemValue;
}
result.Rows.Add(row);
}
return result;
}
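Usage could then look like this, a hedged sketch reusing the query from the question (note the extension method has to live in a static class):

using (TccContext db = new TccContext())
{
    var vendedores = db.Vendedores.AsEnumerable()
        .Where(v => v.codigo == Convert.ToInt32(textBoxPesquisa.Text));
    var dt = vendedores.CopyGenericToDataTable();
    this.dataGridViewProcura.DataSource = dt;
}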
Now, things to consider:
This solution will not work with complex properties
Customizing the resulting table might be a bit tricky
While this might solve the issue, I don't think this is a very good approach, but it could be the start of a decent idea :)
I hope I can help this time!
This is the MSDN-recommended solution: https://msdn.microsoft.com/en-us/library/bb669096(v=vs.110).aspx
I have implemented it successfully (with minor additions to handle nullable DateTime), as follows:
using System;
using System.Collections.Generic;
using System.Linq;
using System.Web;
using System.Data;
using System.Reflection;
/// <summary>
/// Converts Entity Type to DataTable
/// </summary>
public class ObjectShredder<T>
{
private System.Reflection.FieldInfo[] _fi;
private System.Reflection.PropertyInfo[] _pi;
private System.Collections.Generic.Dictionary<string, int> _ordinalMap;
private System.Type _type;
// ObjectShredder constructor.
public ObjectShredder()
{
_type = typeof(T);
_fi = _type.GetFields();
_pi = _type.GetProperties();
_ordinalMap = new Dictionary<string, int>();
}
/// <summary>
/// Loads a DataTable from a sequence of objects.
/// </summary>
/// <param name="source">The sequence of objects to load into the DataTable.</param>
/// <param name="table">The input table. The schema of the table must match that
/// the type T. If the table is null, a new table is created with a schema
/// created from the public properties and fields of the type T.</param>
/// <param name="options">Specifies how values from the source sequence will be applied to
/// existing rows in the table.</param>
/// <returns>A DataTable created from the source sequence.</returns>
public DataTable Shred(IEnumerable<T> source, DataTable table, LoadOption? options)
{
// Load the table from the scalar sequence if T is a primitive type.
if (typeof(T).IsPrimitive)
{
return ShredPrimitive(source, table, options);
}
// Create a new table if the input table is null.
if (table == null)
{
table = new DataTable(typeof(T).Name);
}
// Initialize the ordinal map and extend the table schema based on type T.
table = ExtendTable(table, typeof(T));
// Enumerate the source sequence and load the object values into rows.
table.BeginLoadData();
using (IEnumerator<T> e = source.GetEnumerator())
{
while (e.MoveNext())
{
if (options != null)
{
table.LoadDataRow(ShredObject(table, e.Current), (LoadOption)options);
}
else
{
table.LoadDataRow(ShredObject(table, e.Current), true);
}
}
}
table.EndLoadData();
// Return the table.
return table;
}
public DataTable ShredPrimitive(IEnumerable<T> source, DataTable table, LoadOption? options)
{
// Create a new table if the input table is null.
if (table == null)
{
table = new DataTable(typeof(T).Name);
}
if (!table.Columns.Contains("Value"))
{
table.Columns.Add("Value", typeof(T));
}
// Enumerate the source sequence and load the scalar values into rows.
table.BeginLoadData();
using (IEnumerator<T> e = source.GetEnumerator())
{
Object[] values = new object[table.Columns.Count];
while (e.MoveNext())
{
values[table.Columns["Value"].Ordinal] = e.Current;
if (options != null)
{
table.LoadDataRow(values, (LoadOption)options);
}
else
{
table.LoadDataRow(values, true);
}
}
}
table.EndLoadData();
// Return the table.
return table;
}
public object[] ShredObject(DataTable table, T instance)
{
FieldInfo[] fi = _fi;
PropertyInfo[] pi = _pi;
if (instance.GetType() != typeof(T))
{
// If the instance is derived from T, extend the table schema
// and get the properties and fields.
ExtendTable(table, instance.GetType());
fi = instance.GetType().GetFields();
pi = instance.GetType().GetProperties();
}
// Add the property and field values of the instance to an array.
Object[] values = new object[table.Columns.Count];
foreach (FieldInfo f in fi)
{
values[_ordinalMap[f.Name]] = f.GetValue(instance);
}
foreach (PropertyInfo p in pi)
{
values[_ordinalMap[p.Name]] = p.GetValue(instance, null);
}
// Return the property and field values of the instance.
return values;
}
public DataTable ExtendTable(DataTable table, Type type)
{
// Extend the table schema if the input table was null or if the value
// in the sequence is derived from type T.
foreach (FieldInfo f in type.GetFields())
{
if (!_ordinalMap.ContainsKey(f.Name))
{
// Add the field as a column in the table if it doesn't exist
// already.
DataColumn dc = table.Columns.Contains(f.Name) ? table.Columns[f.Name]
: table.Columns.Add(f.Name, f.FieldType);
// Add the field to the ordinal map.
_ordinalMap.Add(f.Name, dc.Ordinal);
}
}
foreach (PropertyInfo p in type.GetProperties())
{
if (!_ordinalMap.ContainsKey(p.Name))
{
// Add the property as a column in the table if it doesn't exist already.
DataColumn dc = table.Columns[p.Name];
//Added Try Catch to account for Nullable Types
try
{
dc = table.Columns.Contains(p.Name) ? table.Columns[p.Name]
: table.Columns.Add(p.Name, p.PropertyType);
}
catch (NotSupportedException nsEx)
{
string pType = p.PropertyType.ToString();
dc = pType.Contains("System.DateTime") ? table.Columns.Add(p.Name, typeof(System.DateTime)) : table.Columns.Add(p.Name);
//dc = table.Columns.Add(p.Name); //Modified to above statment in order to accomodate Nullable date Time
}
// Add the property to the ordinal map.
_ordinalMap.Add(p.Name, dc.Ordinal);
}
}
// Return the table.
return table;
}
}
The (big) caveat to this solution is that it is costly and you have to customize its error handling.
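For reference, a hedged example of wiring the shredder up to the query from the question:

using (TccContext db = new TccContext())
{
    var query = db.Vendedores.AsEnumerable()
        .Where(v => v.codigo == Convert.ToInt32(textBoxPesquisa.Text));
    // Passing null for the table creates a new DataTable with a schema built from Vendedor.
    DataTable dt = new ObjectShredder<Vendedor>().Shred(query, null, null);
    this.dataGridViewProcura.DataSource = dt;
}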
You can do:
var query = from ....
this.dataGridViewProcura.DataSource = query.ToList();
I'm using the Save() method to insert or update records, but I would like to make it perform a bulk insert and bulk update with only one database hit. How do I do this?
In my case, I took advantage of the database.Execute() method.
I created a Sql object that had the first part of my insert:
var sql = new Sql("insert into myTable(Name, Age, Gender) values");
for (int i = 0; i < pocos.Count ; ++i)
{
var p = pocos[i];
sql.Append("(#0, #1, #2)", p.Name, p.Age , p.Gender);
if(i != pocos.Count -1)
sql.Append(",");
}
Database.Execute(sql);
I tried two different methods for inserting a large quantity of rows faster than the default Insert (which is pretty slow when you have a lot of rows).
1) Building up a List<T> of the POCOs first and then inserting them one by one within a loop (and inside a transaction):
using (var tr = PetaPocoDb.GetTransaction())
{
foreach (var record in listOfRecords)
{
PetaPocoDb.Insert(record);
}
tr.Complete();
}
2) SqlBulkCopy a DataTable:
var bulkCopy = new SqlBulkCopy(connectionString, SqlBulkCopyOptions.TableLock);
bulkCopy.DestinationTableName = "SomeTable";
bulkCopy.WriteToServer(dt);
To get my List<T> into a DataTable I used Marc Gravell's "Convert generic List/Enumerable to DataTable?" function, which worked OOTB for me (after I rearranged the POCO properties to be in the exact same order as the table fields in the db).
The SqlBulkCopy was fastest, 50% or so faster than the transactions method in the (quick) perf tests I did with ~1000 rows.
Hth
Inserting in one SQL query is much faster.
Here is a custom method for the PetaPoco.Database class that adds the ability to do a bulk insert of any collection:
public void BulkInsertRecords<T>(IEnumerable<T> collection)
{
try
{
OpenSharedConnection();
using (var cmd = CreateCommand(_sharedConnection, ""))
{
var pd = Database.PocoData.ForType(typeof(T));
var tableName = EscapeTableName(pd.TableInfo.TableName);
string cols = string.Join(", ", (from c in pd.QueryColumns select tableName + "." + EscapeSqlIdentifier(c)).ToArray());
var pocoValues = new List<string>();
var index = 0;
foreach (var poco in collection)
{
var values = new List<string>();
foreach (var i in pd.Columns)
{
values.Add(string.Format("{0}{1}", _paramPrefix, index++));
AddParam(cmd, i.Value.GetValue(poco), _paramPrefix);
}
pocoValues.Add("(" + string.Join(",", values.ToArray()) + ")");
}
var sql = string.Format("INSERT INTO {0} ({1}) VALUES {2}", tableName, cols, string.Join(", ", pocoValues));
cmd.CommandText = sql;
cmd.ExecuteNonQuery();
}
}
finally
{
CloseSharedConnection();
}
}
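Usage is then a single call (a hedged example; customers stands for any IEnumerable of a POCO mapped to the target table, and db is the PetaPoco.Database instance carrying the method above):

db.BulkInsertRecords(customers);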
Here is an updated version of Steve Jansen's answer that splits the collection into chunks, so that each batch stays under the 2100-parameter limit.
I commented out the following code as it produces duplicates in the database...
//using (var reader = cmd.ExecuteReader())
//{
// while (reader.Read())
// {
// inserted.Add(reader[0]);
// }
//}
Updated Code
/// <summary>
/// Performs an SQL Insert against a collection of pocos
/// </summary>
/// <param name="pocos">A collection of POCO objects that specifies the column values to be inserted. Assumes that every POCO is of the same type.</param>
/// <returns>An array of the auto allocated primary key of the new record, or null for non-auto-increment tables</returns>
public object BulkInsert(IEnumerable<object> pocos)
{
Sql sql;
IList<PocoColumn> columns = new List<PocoColumn>();
IList<object> parameters;
IList<object> inserted;
PocoData pd;
Type primaryKeyType;
object template;
string commandText;
string tableName;
string primaryKeyName;
bool autoIncrement;
int maxBulkInsert;
if (null == pocos)
{
return new object[] { };
}
template = pocos.First<object>();
if (null == template)
{
return null;
}
pd = PocoData.ForType(template.GetType());
tableName = pd.TableInfo.TableName;
primaryKeyName = pd.TableInfo.PrimaryKey;
autoIncrement = pd.TableInfo.AutoIncrement;
//Calculate the maximum chunk size
maxBulkInsert = 2100 / pd.Columns.Count;
IEnumerable<object> pacosToInsert = pocos.Take(maxBulkInsert);
IEnumerable<object> pacosremaining = pocos.Skip(maxBulkInsert);
try
{
OpenSharedConnection();
try
{
var names = new List<string>();
var values = new List<string>();
var index = 0;
foreach (var i in pd.Columns)
{
// Don't insert result columns
if (i.Value.ResultColumn)
continue;
// Don't insert the primary key (except under oracle where we need bring in the next sequence value)
if (autoIncrement && primaryKeyName != null && string.Compare(i.Key, primaryKeyName, true) == 0)
{
primaryKeyType = i.Value.PropertyInfo.PropertyType;
// Setup auto increment expression
string autoIncExpression = _dbType.GetAutoIncrementExpression(pd.TableInfo);
if (autoIncExpression != null)
{
names.Add(i.Key);
values.Add(autoIncExpression);
}
continue;
}
names.Add(_dbType.EscapeSqlIdentifier(i.Key));
values.Add(string.Format("{0}{1}", _paramPrefix, index++));
columns.Add(i.Value);
}
string outputClause = String.Empty;
if (autoIncrement)
{
outputClause = _dbType.GetInsertOutputClause(primaryKeyName);
}
commandText = string.Format("INSERT INTO {0} ({1}){2} VALUES",
_dbType.EscapeTableName(tableName),
string.Join(",", names.ToArray()),
outputClause
);
sql = new Sql(commandText);
parameters = new List<object>();
string valuesText = string.Concat("(", string.Join(",", values.ToArray()), ")");
bool isFirstPoco = true;
var parameterCounter = 0;
foreach (object poco in pacosToInsert)
{
parameterCounter++;
parameters.Clear();
foreach (PocoColumn column in columns)
{
parameters.Add(column.GetValue(poco));
}
sql.Append(valuesText, parameters.ToArray<object>());
if (isFirstPoco && pocos.Count() > 1)
{
valuesText = "," + valuesText;
isFirstPoco = false;
}
}
inserted = new List<object>();
using (var cmd = CreateCommand(_sharedConnection, sql.SQL, sql.Arguments))
{
if (!autoIncrement)
{
DoPreExecute(cmd);
cmd.ExecuteNonQuery();
OnExecutedCommand(cmd);
PocoColumn pkColumn;
if (primaryKeyName != null && pd.Columns.TryGetValue(primaryKeyName, out pkColumn))
{
foreach (object poco in pocos)
{
inserted.Add(pkColumn.GetValue(poco));
}
}
return inserted.ToArray<object>();
}
object id = _dbType.ExecuteInsert(this, cmd, primaryKeyName);
if (pacosremaining.Any())
{
return BulkInsert(pacosremaining);
}
return id;
//using (var reader = cmd.ExecuteReader())
//{
// while (reader.Read())
// {
// inserted.Add(reader[0]);
// }
//}
//object[] primaryKeys = inserted.ToArray<object>();
//// Assign the ID back to the primary key property
//if (primaryKeyName != null)
//{
// PocoColumn pc;
// if (pd.Columns.TryGetValue(primaryKeyName, out pc))
// {
// index = 0;
// foreach (object poco in pocos)
// {
// pc.SetValue(poco, pc.ChangeType(primaryKeys[index]));
// index++;
// }
// }
//}
//return primaryKeys;
}
}
finally
{
CloseSharedConnection();
}
}
catch (Exception x)
{
if (OnException(x))
throw;
return null;
}
}
Below is a BulkInsert method for PetaPoco that expands on taylonr's very clever idea of using the SQL technique of inserting multiple rows via INSERT INTO tab(col1, col2) OUTPUT inserted.[ID] VALUES (@0, @1), (@2, @3), (@4, @5), ..., (@n-1, @n).
It also returns the auto-increment (identity) values of inserted records, which I don't believe happens in IvoTops' implementation.
NOTE: SQL Server 2012 (and below) has a limit of 2,100 parameters per query. (This is likely the source of the stack overflow exception referenced by Zelid's comment). You will need to manually split your batches up based on the number of columns that are not decorated as Ignore or Result. For example, a POCO with 21 columns should be sent in batch sizes of 99, or (2100 - 1) / 21. I may refactor this to dynamically split batches based on this limit for SQL Server; however, you will always see the best results by managing the batch size external to this method.
This method showed an approximate 50% gain in execution time over my previous technique of using a shared connection in a single transaction for all inserts.
This is one area where Massive really shines - Massive has a Save(params object[] things) that builds an array of IDbCommands, and executes each one on a shared connection. It works out of the box, and doesn't run into parameter limits.
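For example, a minimal sketch of managing the batch size externally before calling the BulkInsert method below; the 21-column figure, the pocoList variable, and db (a PetaPoco Database extended with this method) are assumptions for illustration:

// With 21 mapped columns, keep each batch under the 2,100-parameter limit: (2100 - 1) / 21 = 99 rows.
int mappedColumns = 21;
int batchSize = (2100 - 1) / mappedColumns;
for (int offset = 0; offset < pocoList.Count; offset += batchSize)
{
    var batch = pocoList.Skip(offset).Take(batchSize).ToList();
    db.BulkInsert(batch);
}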
/// <summary>
/// Performs an SQL Insert against a collection of pocos
/// </summary>
/// <param name="pocos">A collection of POCO objects that specifies the column values to be inserted. Assumes that every POCO is of the same type.</param>
/// <returns>An array of the auto allocated primary key of the new record, or null for non-auto-increment tables</returns>
/// <remarks>
/// NOTE: As of SQL Server 2012, there is a limit of 2100 parameters per query. This limitation does not seem to apply on other platforms, so
/// this method will allow more than 2100 parameters. See http://msdn.microsoft.com/en-us/library/ms143432.aspx
/// The name of the table, it's primary key and whether it's an auto-allocated primary key are retrieved from the attributes of the first POCO in the collection
/// </remarks>
public object[] BulkInsert(IEnumerable<object> pocos)
{
Sql sql;
IList<PocoColumn> columns = new List<PocoColumn>();
IList<object> parameters;
IList<object> inserted;
PocoData pd;
Type primaryKeyType;
object template;
string commandText;
string tableName;
string primaryKeyName;
bool autoIncrement;
if (null == pocos)
return new object[] {};
template = pocos.First<object>();
if (null == template)
return null;
pd = PocoData.ForType(template.GetType());
tableName = pd.TableInfo.TableName;
primaryKeyName = pd.TableInfo.PrimaryKey;
autoIncrement = pd.TableInfo.AutoIncrement;
try
{
OpenSharedConnection();
try
{
var names = new List<string>();
var values = new List<string>();
var index = 0;
foreach (var i in pd.Columns)
{
// Don't insert result columns
if (i.Value.ResultColumn)
continue;
// Don't insert the primary key (except under oracle where we need bring in the next sequence value)
if (autoIncrement && primaryKeyName != null && string.Compare(i.Key, primaryKeyName, true) == 0)
{
primaryKeyType = i.Value.PropertyInfo.PropertyType;
// Setup auto increment expression
string autoIncExpression = _dbType.GetAutoIncrementExpression(pd.TableInfo);
if (autoIncExpression != null)
{
names.Add(i.Key);
values.Add(autoIncExpression);
}
continue;
}
names.Add(_dbType.EscapeSqlIdentifier(i.Key));
values.Add(string.Format("{0}{1}", _paramPrefix, index++));
columns.Add(i.Value);
}
string outputClause = String.Empty;
if (autoIncrement)
{
outputClause = _dbType.GetInsertOutputClause(primaryKeyName);
}
commandText = string.Format("INSERT INTO {0} ({1}){2} VALUES",
_dbType.EscapeTableName(tableName),
string.Join(",", names.ToArray()),
outputClause
);
sql = new Sql(commandText);
parameters = new List<object>();
string valuesText = string.Concat("(", string.Join(",", values.ToArray()), ")");
bool isFirstPoco = true;
foreach (object poco in pocos)
{
parameters.Clear();
foreach (PocoColumn column in columns)
{
parameters.Add(column.GetValue(poco));
}
sql.Append(valuesText, parameters.ToArray<object>());
if (isFirstPoco)
{
valuesText = "," + valuesText;
isFirstPoco = false;
}
}
inserted = new List<object>();
using (var cmd = CreateCommand(_sharedConnection, sql.SQL, sql.Arguments))
{
if (!autoIncrement)
{
DoPreExecute(cmd);
cmd.ExecuteNonQuery();
OnExecutedCommand(cmd);
PocoColumn pkColumn;
if (primaryKeyName != null && pd.Columns.TryGetValue(primaryKeyName, out pkColumn))
{
foreach (object poco in pocos)
{
inserted.Add(pkColumn.GetValue(poco));
}
}
return inserted.ToArray<object>();
}
// BUG: the following line reportedly causes duplicate inserts; need to confirm
//object id = _dbType.ExecuteInsert(this, cmd, primaryKeyName);
using(var reader = cmd.ExecuteReader())
{
while (reader.Read())
{
inserted.Add(reader[0]);
}
}
object[] primaryKeys = inserted.ToArray<object>();
// Assign the ID back to the primary key property
if (primaryKeyName != null)
{
PocoColumn pc;
if (pd.Columns.TryGetValue(primaryKeyName, out pc))
{
index = 0;
foreach(object poco in pocos)
{
pc.SetValue(poco, pc.ChangeType(primaryKeys[index]));
index++;
}
}
}
return primaryKeys;
}
}
finally
{
CloseSharedConnection();
}
}
catch (Exception x)
{
if (OnException(x))
throw;
return null;
}
}
Here is the code for BulkInsert that you can add to v5.01 PetaPoco.cs.
You can paste it somewhere close to the regular Insert at line 1098.
You give it an IEnumerable of POCOs and it will send them to the database in batches of x at a time. The code is 90% taken from the regular Insert.
I do not have a performance comparison; let me know :)
/// <summary>
/// Bulk inserts multiple rows to SQL
/// </summary>
/// <param name="tableName">The name of the table to insert into</param>
/// <param name="primaryKeyName">The name of the primary key column of the table</param>
/// <param name="autoIncrement">True if the primary key is automatically allocated by the DB</param>
/// <param name="pocos">The POCO objects that specifies the column values to be inserted</param>
/// <param name="batchSize">The number of POCOS to be grouped together for each database rounddtrip</param>
public void BulkInsert(string tableName, string primaryKeyName, bool autoIncrement, IEnumerable<object> pocos, int batchSize = 25)
{
try
{
OpenSharedConnection();
try
{
using (var cmd = CreateCommand(_sharedConnection, ""))
{
var pd = PocoData.ForObject(pocos.First(), primaryKeyName);
// Create list of columnnames only once
var names = new List<string>();
foreach (var i in pd.Columns)
{
// Don't insert result columns
if (i.Value.ResultColumn)
continue;
// Don't insert the primary key (except under oracle where we need bring in the next sequence value)
if (autoIncrement && primaryKeyName != null && string.Compare(i.Key, primaryKeyName, true) == 0)
{
// Setup auto increment expression
string autoIncExpression = _dbType.GetAutoIncrementExpression(pd.TableInfo);
if (autoIncExpression != null)
{
names.Add(i.Key);
}
continue;
}
names.Add(_dbType.EscapeSqlIdentifier(i.Key));
}
var namesArray = names.ToArray();
var values = new List<string>();
int count = 0;
do
{
cmd.CommandText = "";
cmd.Parameters.Clear();
var index = 0;
foreach (var poco in pocos.Skip(count).Take(batchSize))
{
values.Clear();
foreach (var i in pd.Columns)
{
// Don't insert result columns
if (i.Value.ResultColumn) continue;
// Don't insert the primary key (except under oracle where we need bring in the next sequence value)
if (autoIncrement && primaryKeyName != null && string.Compare(i.Key, primaryKeyName, true) == 0)
{
// Setup auto increment expression
string autoIncExpression = _dbType.GetAutoIncrementExpression(pd.TableInfo);
if (autoIncExpression != null)
{
values.Add(autoIncExpression);
}
continue;
}
values.Add(string.Format("{0}{1}", _paramPrefix, index++));
AddParam(cmd, i.Value.GetValue(poco), i.Value.PropertyInfo);
}
string outputClause = String.Empty;
if (autoIncrement)
{
outputClause = _dbType.GetInsertOutputClause(primaryKeyName);
}
cmd.CommandText += string.Format("INSERT INTO {0} ({1}){2} VALUES ({3})", _dbType.EscapeTableName(tableName),
string.Join(",", namesArray), outputClause, string.Join(",", values.ToArray()));
}
// Are we done?
if (cmd.CommandText == "") break;
count += batchSize;
DoPreExecute(cmd);
cmd.ExecuteNonQuery();
OnExecutedCommand(cmd);
}
while (true);
}
}
finally
{
CloseSharedConnection();
}
}
catch (Exception x)
{
if (OnException(x))
throw;
}
}
/// <summary>
/// Performs a SQL Bulk Insert
/// </summary>
/// <param name="pocos">The POCO objects that specifies the column values to be inserted</param>
/// <param name="batchSize">The number of POCOS to be grouped together for each database rounddtrip</param>
public void BulkInsert(IEnumerable<object> pocos, int batchSize = 25)
{
if (!pocos.Any()) return;
var pd = PocoData.ForType(pocos.First().GetType());
BulkInsert(pd.TableInfo.TableName, pd.TableInfo.PrimaryKey, pd.TableInfo.AutoIncrement, pocos, batchSize);
}
And along the same lines, if you want BulkUpdate:
public void BulkUpdate<T>(string tableName, string primaryKeyName, IEnumerable<T> pocos, int batchSize = 25)
{
try
{
object primaryKeyValue = null;
OpenSharedConnection();
try
{
using (var cmd = CreateCommand(_sharedConnection, ""))
{
var pd = PocoData.ForObject(pocos.First(), primaryKeyName);
int count = 0;
do
{
cmd.CommandText = "";
cmd.Parameters.Clear();
var index = 0;
var cmdText = new StringBuilder();
foreach (var poco in pocos.Skip(count).Take(batchSize))
{
var sb = new StringBuilder();
var colIdx = 0;
foreach (var i in pd.Columns)
{
// Don't update the primary key, but grab the value if we don't have it
if (string.Compare(i.Key, primaryKeyName, true) == 0)
{
primaryKeyValue = i.Value.GetValue(poco);
continue;
}
// Dont update result only columns
if (i.Value.ResultColumn)
continue;
// Build the sql
if (colIdx > 0)
sb.Append(", ");
sb.AppendFormat("{0} = {1}{2}", _dbType.EscapeSqlIdentifier(i.Key), _paramPrefix,
index++);
// Store the parameter in the command
AddParam(cmd, i.Value.GetValue(poco), i.Value.PropertyInfo);
colIdx++;
}
// Find the property info for the primary key
PropertyInfo pkpi = null;
if (primaryKeyName != null)
{
pkpi = pd.Columns[primaryKeyName].PropertyInfo;
}
cmdText.Append(string.Format("UPDATE {0} SET {1} WHERE {2} = {3}{4};\n",
_dbType.EscapeTableName(tableName), sb.ToString(),
_dbType.EscapeSqlIdentifier(primaryKeyName), _paramPrefix,
index++));
AddParam(cmd, primaryKeyValue, pkpi);
}
if (cmdText.Length == 0) break;
if (_providerName.IndexOf("oracle", StringComparison.OrdinalIgnoreCase) >= 0)
{
cmdText.Insert(0, "BEGIN\n");
cmdText.Append("\n END;");
}
DoPreExecute(cmd);
cmd.CommandText = cmdText.ToString();
count += batchSize;
cmd.ExecuteNonQuery();
OnExecutedCommand(cmd);
} while (true);
}
}
finally
{
CloseSharedConnection();
}
}
catch (Exception x)
{
if (OnException(x))
throw;
}
}
Here's a nice 2018 update using FastMember from NuGet:
private static async Task SqlBulkCopyPocoAsync<T>(PetaPoco.Database db, IEnumerable<T> data)
{
var pd = PocoData.ForType(typeof(T), db.DefaultMapper);
using (var bcp = new SqlBulkCopy(db.ConnectionString))
using (var reader = ObjectReader.Create(data))
{
// set up a mapping from the property names to the column names
var propNames = typeof(T).GetProperties().Where(p => Attribute.IsDefined(p, typeof(ResultColumnAttribute)) == false).Select(propertyInfo => propertyInfo.Name).ToArray();
foreach (var propName in propNames)
{
bcp.ColumnMappings.Add(propName, "[" + pd.GetColumnName(propName) + "]");
}
bcp.DestinationTableName = pd.TableInfo.TableName;
await bcp.WriteToServerAsync(reader).ConfigureAwait(false);
}
}
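A hedged usage example, assuming db is your PetaPoco.Database and orders is a collection of a POCO mapped to the destination table:

// Streams the POCOs into the mapped table via SqlBulkCopy.
await SqlBulkCopyPocoAsync(db, orders);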
You can just do a foreach on your records.
foreach (var record in records) {
db.Save(record);
}