How to set ExecuteSqlCommand with Table variable - c#

I have a query I would like to run via C# application. There is no option to do this outside of the application. I have the following code:
var keyGroupsToCleanUp = new List<string>
{
    "Address",
    "Manufacturer",
    "Product",
    "Customer",
    "Picture",
    "Category",
    "Vendor",
    "SS_A_Attachment",
    "SS_A_AttachmentDownload",
    "SS_MAP_EntityMapping",
    "SS_MAP_EntityWidgetMapping",
};
foreach (var keyGroup in keyGroupsToCleanUp)
{
    _databaseFacade.ExecuteSqlCommand($@"
        DELETE
        FROM GenericAttribute
        WHERE KeyGroup = {keyGroup} AND [Key] = 'CommonId'
        AND EntityId NOT IN (SELECT Id FROM [{keyGroup}]);
    ");
}
I want to loop through each name in the List and run the below query for each of them. When I try to do this, I receive the following error:
System.Data.SqlClient.SqlException (0x80131904): Invalid object name '@p1'.
From what I have gathered after searching online, this is because a table name cannot be passed as a parameter. You have to declare a variable and use that variable for the table name. I learned that a table variable has columns that need to be declared, and felt a wave of dread wash over me: none of these tables have the same column structure.
Is what I am trying to do possible? If so, how can I do it?
The GenericAttributes table is one large table that consists of six columns.
When I joined the project this is used on, it had already been used to the point where it was irreplaceable. You can save additional data for a database table in here by specifying the KeyGroup as the database table. We have a table called "Address", and we save additional data for addresses in the GenericAttributes table (it does not make sense, I know). This causes a lot of issues, because a relational database is not meant for this. The query I have written above looks for rows in the GenericAttributes table that are now detached. For example, the EntityId 0 does not exist as an Id in Address, so it would be returned here. That row must then be deleted, because it is linked to a non-existent EntityId.
This is an example of a query that would achieve that:
// Address
_databaseFacade.ExecuteSqlCommand(@"
    DELETE
    FROM GenericAttribute
    WHERE KeyGroup = 'Address' AND [Key] = 'CommonId'
    AND EntityId NOT IN (SELECT Id FROM [Address]);
");
I have to do this for 11 tables, so I wanted to make it a bit easier to do. Every query is written in the same way. The only thing that changes is the KeyGroup and the table that it looks for. These will both always have the same name.
Here is an example of another call, for Product. They are the same; the only difference is the KeyGroup and the table in the NOT IN statement.
// Product
_databaseFacade.ExecuteSqlCommand(@"
    DELETE
    FROM GenericAttribute
    WHERE KeyGroup = 'Product' AND [Key] = 'CommonId'
    AND EntityId NOT IN (SELECT Id FROM Product);
");

To ensure there is no injection vulnerability, you can use dynamic SQL with QUOTENAME:
_databaseFacade.ExecuteSqlRaw(@"
    DECLARE @sql nvarchar(max) = N'
        DELETE
        FROM GenericAttribute
        WHERE KeyGroup = @keyGroup AND [Key] = ''CommonId''
        AND EntityId NOT IN (SELECT Id FROM ' + QUOTENAME({0}) + N');
    ';
    EXEC sp_executesql @sql,
        N'@keyGroup nvarchar(100)',
        @keyGroup = {0};
", keyGroup);
Note how ExecuteSqlRaw performs the substitution itself, turning {0} into a command parameter. Do not interpolate the string yourself with $.
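For completeness, a usage sketch (cleanupSql is a hypothetical variable holding the raw statement above; the list and facade come from the question):
// Hedged sketch: cleanupSql is a hypothetical variable holding the raw statement
// above. Each iteration passes the key group once as {0}; inside the batch,
// QUOTENAME brackets it where it is spliced in as the table name, and
// sp_executesql also receives it as the @keyGroup value parameter.
foreach (var keyGroup in keyGroupsToCleanUp)
{
    _databaseFacade.ExecuteSqlRaw(cleanupSql, keyGroup);
}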

At a guess, you're using Entity Framework Core. The ExecuteSqlCommand method accepts a FormattableString, and converts any placeholders into command parameters. But your placeholders appear to be column/table names, which cannot be passed as parameters.
Since there's also an overload that accepts a plain string and has different behaviour, this method has been marked as obsolete and replaced by ExecuteSqlInterpolated and ExecuteSqlRaw.
Assuming none of your values can be influenced by the user, and you're happy that you're not going to introduce a SQL Injection vulnerability, you can use ExecuteSqlRaw instead:
_databaseFacade.ExecuteSqlRaw($@"
    DELETE
    FROM GenericAttribute
    WHERE KeyGroup = '{keyGroup}' AND [Key] = 'CommonId'
    AND EntityId NOT IN (SELECT Id FROM [{keyGroup}]);
");

Try the following:
foreach (var keyGroup in keyGroupsToCleanUp)
{
    var sql = @"DELETE FROM GenericAttribute
                WHERE KeyGroup = @Group
                AND [Key] = 'CommonId'
                AND EntityId NOT IN (SELECT Id FROM @Group)"; // Or [@Group], depends on schema
    _databaseFacade.ExecuteSqlCommand(
        sql,
        new SqlParameter("@Group", keyGroup));
}
This code assumes that ExecuteSqlCommand in your facade follows the standard Microsoft pattern (same overloads as Microsoft's).

Related

Default values aren't set in SQL Server table when created by Entity Framework model

I have lots of tables that contain default values, such as CreatedDateTime (getutcdate()). But right now, the value 0001-01-01 00:00:00.0000000 gets stored instead.
https://stackoverflow.com/a/35093135/7731479 --> that is not effective; I have to do it manually for each table on every database model update (edmx). How can I update all StoreGeneratedPattern values to Computed automatically? Or why does it not take Computed automatically?
https://stackoverflow.com/a/43400053/7731479 --> ado.net generates all properties and I can't generate CreatedDateTime again.
Is there any automatic solution?
I am using Entity Framework and ado.net.
Person person = new Person()
{
    Id = id,
    Name = name,
};
AddToPerson(person);
SaveChanges();
I want to use the above. I don't want to use the following and assign CreatedDateTime again, because it is already assigned in MSSQL with the default value getutcdate().
Person person = new Person()
{
    Id = id,
    Name = name,
    CreatedDateTime = DateTime.UtcNow,
};
AddToPerson(person);
SaveChanges();
The configured default constraint of the SQL Server table will only be applied if you have a SQL INSERT statement that omits the column in question.
So if you insert
INSERT INTO dbo.Person(Id, Name) VALUES (42, 'John Doe')
--> then your CreatedDateTime will automatically be set to the GETUTCDATE() value.
Unfortunately, if you have mapped this column in your EF model class, then this is not what happens. If you create an instance of Person in your C# code, and the CreatedDateTime column is in fact part of the model class, then EF will use something like this to insert the new person:
INSERT INTO dbo.Person(Id, Name, CreatedDateTime) VALUES (42, 'John Doe', NULL)
and since now NULL is in fact provided for the CreatedDateTime column, that's the value that will be stored - or maybe it's an empty string - no matter what, the column is specified in the INSERT statement and thus the configured default constraint is not applied.
So if you want to let SQL Server kick in with the defaults, you need to make sure not to provide the column(s) in question in the INSERT statement at all. You can do this by:
having a separate model class just for inserts, which does not include the columns in question - e.g. a NewPerson entity that also maps to the Person table but consists only of Name and Id (see the sketch after this list). Since those properties aren't there, EF cannot and will not generate an INSERT statement containing them, so the SQL Server default constraints will kick in
mapping the INSERT method to a SQL Server stored procedure and handling the insert inside that procedure, by explicitly not specifying the columns you want to take on default values
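A minimal sketch of the first option, assuming EF Code First style configuration (NewPerson and its mapping are hypothetical, purely illustrative):
// Hypothetical insert-only entity: CreatedDateTime is deliberately absent, so EF
// omits that column from the INSERT and the default constraint applies.
public class NewPerson
{
    public int Id { get; set; }
    public string Name { get; set; }
}
// In the DbContext configuration (assumption: your EF version allows a second
// entity mapped to the same table, or a separate context is used for inserts):
// modelBuilder.Entity<NewPerson>().ToTable("Person");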
Maybe I'm wrong, but I have a question.
If you need to save a default date in your DB table, why are you trying to save another date from the program level? I mean, it's easy to create a procedure and save the date at the procedure level. Something like (select getdate()...).
I have found two solutions:
1- This solution works for all entities that have the same property, such as CreatedDateTime
public partial class MyEntities : ObjectContext
{
    public override int SaveChanges(SaveOptions options)
    {
        this.DetectChanges();
        foreach (var insert in this.ObjectStateManager.GetObjectStateEntries(System.Data.EntityState.Added))
        {
            // If the entity has a DateTime property named CreatedDateTime that still
            // holds its default value, overwrite it with the current UTC time.
            var prop = insert.Entity.GetType().GetProperty("CreatedDateTime");
            if (prop != null && prop.PropertyType == typeof(DateTime)
                && (DateTime)prop.GetValue(insert.Entity, null) == DateTime.MinValue)
            {
                prop.SetValue(insert.Entity, DateTime.UtcNow, null);
            }
        }
        return base.SaveChanges(options);
    }
}
reference: https://stackoverflow.com/a/5965743/7731479
2-
public partial class Person
{
public Person()
{
this.CreatedDateTime = DateTime.UtcNow;
}
}
reference: DB default value ignored when creating Entity Framework model

How to filter a column that starts with a given string using sql parameters and still be able to fully take advantage of an index on that column?

Let's say I have a table with the following definition
CREATE TABLE Person (
    ...
    FirstName NVARCHAR(255),
    ...
);
CREATE INDEX IX_FirstName ON Person (FirstName);
If I want to get all persons whose first name starts with "Foo", I can write the following query:
SELECT * FROM Person WHERE FirstName LIKE 'Foo%'
The following query is very fast. SQL Server is able to do an index seek on IX_FirstName.
However, because the search term comes directly from user input, I have decided to use SQL value parameters:
string sql = "SELECT * FROM Person WHERE FirstName LIKE Concat(@searchterm, '%')";
SqlCommand cmd = new SqlCommand(sql);
cmd.Parameters.AddWithValue("@searchterm", "Foo");
This is (I think) the equivalent of this:
DECLARE @searchterm NVARCHAR(255)
SET @searchterm = 'Foo'
SELECT * FROM Person WHERE FirstName LIKE Concat(@searchterm, '%')
This results in an index scan (which is slow).
AFAIK, because the parameter value is not a part of the query, SQL Server is not able to perform parameter sniffing. The search term could be '%Foo%', which makes it impossible to use the index.
I have also tried the following alternative, but it does not help (an index scan is still used).
SELECT * FROM Person WHERE CHARINDEX(@searchterm, FirstName) = 1
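A commonly suggested workaround (a sketch, assuming the term never needs a leading wildcard) is OPTION (RECOMPILE), which lets the optimizer treat the sniffed value like a literal and pick the seek:
// Sketch: OPTION (RECOMPILE) recompiles the statement on every execution, so SQL
// Server can embed the actual @searchterm value and choose the index seek for
// 'Foo%'-style terms. The trade-off is a compilation cost per call.
string sql = @"SELECT * FROM Person
               WHERE FirstName LIKE Concat(@searchterm, '%')
               OPTION (RECOMPILE)";
SqlCommand cmd = new SqlCommand(sql);
cmd.Parameters.AddWithValue("@searchterm", "Foo");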

How can I use more than 2100 values in an IN clause using Dapper?

I have a List containing ids that I want to insert into a temp table using Dapper in order to avoid the SQL limit on parameters in the 'IN' clause.
So currently my code looks like this:
public IList<int> LoadAnimalTypeIdsFromAnimalIds(IList<int> animalIds)
{
    using (var db = new SqlConnection(this.connectionString))
    {
        return db.Query<int>(
            @"SELECT a.animalID
              FROM
                  dbo.animalTypes [at]
                  INNER JOIN animals [a] on a.animalTypeId = at.animalTypeId
                  INNER JOIN edibleAnimals e on e.animalID = a.animalID
              WHERE
                  at.animalId in @animalIds", new { animalIds }).ToList();
    }
}
The problem I need to solve is that when there are more than 2100 ids in the animalIds list, I get a SQL error: "The incoming request has too many parameters. The server supports a maximum of 2100 parameters".
So now I would like to create a temp table populated with the animalIds passed into the method. Then I can join the animals table on the temp table and avoid having a huge IN clause.
I have tried various combinations of syntax but haven't got anywhere.
This is where I am now:
public IList<int> LoadAnimalTypeIdsFromAnimalIds(IList<int> animalIds)
{
    using (var db = new SqlConnection(this.connectionString))
    {
        db.Execute(@"SELECT INTO #tempAnmialIds @animalIds");
        return db.Query<int>(
            @"SELECT a.animalID
              FROM
                  dbo.animalTypes [at]
                  INNER JOIN animals [a] on a.animalTypeId = at.animalTypeId
                  INNER JOIN edibleAnimals e on e.animalID = a.animalID
                  INNER JOIN #tempAnmialIds tmp on tmp.animalID = a.animalID").ToList();
    }
}
I can't get the SELECT INTO working with the list of IDs. Am I going about this the wrong way? Maybe there is a better way to avoid the IN clause limit.
I do have a backup solution: I can split the incoming list of animalIDs into blocks of 1000, but I've read that a large IN clause suffers a performance hit, and joining a temp table should be more efficient. It also means I don't need extra 'splitting' code to batch the ids into blocks of 1000.
Ok, here's the version you want. I'm adding this as a separate answer, as my first answer using SP/TVP utilizes a different concept.
public IList<int> LoadAnimalTypeIdsFromAnimalIds(IList<int> animalIds)
{
    using (var db = new SqlConnection(this.connectionString))
    {
        // This Open() call is vital! If you don't open the connection, Dapper will
        // open/close it automagically, which means that you'll lose the created
        // temp table directly after the statement completes.
        db.Open();
        // This temp table is created having a primary key. So make sure you don't pass
        // any duplicate IDs.
        db.Execute("CREATE TABLE #tempAnimalIds(animalId int not null primary key);");
        while (animalIds.Any())
        {
            // Build the statements to insert the Ids. For this, we need to split animalIDs
            // into chunks of 1000, as this flavour of INSERT INTO is limited to 1000 values
            // at a time.
            var ids2Insert = animalIds.Take(1000);
            animalIds = animalIds.Skip(1000).ToList();
            StringBuilder stmt = new StringBuilder("INSERT INTO #tempAnimalIds VALUES (");
            stmt.Append(string.Join("),(", ids2Insert));
            stmt.Append(");");
            db.Execute(stmt.ToString());
        }
        return db.Query<int>("SELECT animalID FROM #tempAnimalIds").ToList();
    }
}
To test:
var ids = LoadAnimalTypeIdsFromAnimalIds(Enumerable.Range(1, 2500).ToList());
You just need to amend your select statement to what it originally was. As I don't have all your tables in my environment, I just selected from the created temp table to prove it works the way it should.
Pitfalls, see comments:
Open the connection at the beginning; otherwise the temp table will be gone after Dapper automatically closes the connection right after creating it.
This particular flavour of INSERT INTO is limited to 1000 values at a time, so the passed IDs need to be split into chunks accordingly.
Don't pass duplicate keys, as the primary key on the temp table will not allow that.
Edit
It seems Dapper supports a set-based operation which will make this work too:
public IList<int> LoadAnimalTypeIdsFromAnimalIdsV2(IList<int> animalIds)
{
    // This creates an IEnumerable of an anonymous type containing an Id property. This seems
    // to be necessary to be able to grab the Id by its name via Dapper.
    var namedIDs = animalIds.Select(i => new { Id = i });
    using (var db = new SqlConnection(this.connectionString))
    {
        // This is vital! If you don't open the connection, Dapper will open/close it
        // automagically, which means that you'll lose the created temp table directly
        // after the statement completes.
        db.Open();
        // This temp table is created having a primary key. So make sure you don't pass
        // any duplicate IDs.
        db.Execute("CREATE TABLE #tempAnimalIds(animalId int not null primary key);");
        // Using one of Dapper's convenient features, the INSERT becomes:
        db.Execute("INSERT INTO #tempAnimalIds VALUES(@Id);", namedIDs);
        return db.Query<int>("SELECT animalID FROM #tempAnimalIds").ToList();
    }
}
I don't know how well this will perform compared to the previous version (i.e. 2500 single inserts instead of three inserts with 1000, 1000, and 500 values each). But the docs suggest that it performs better if used together with async, MARS and pipelining.
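For instance, a hedged async sketch of the same flow (assumes Dapper's ExecuteAsync/QueryAsync extensions inside an async variant of the method; untested):
// Hypothetical async variant: one parameterized INSERT is sent per element of
// namedIDs over the explicitly opened connection, so the temp table survives.
await db.OpenAsync();
await db.ExecuteAsync("CREATE TABLE #tempAnimalIds(animalId int not null primary key);");
await db.ExecuteAsync("INSERT INTO #tempAnimalIds VALUES(@Id);", namedIDs);
var result = (await db.QueryAsync<int>("SELECT animalID FROM #tempAnimalIds")).ToList();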
In your example, what I can't see is how your list of animalIds is actually passed to the query to be inserted into the #tempAnimalIDs table.
There is a way to do it without using a temp table, utilizing a stored procedure with a table-valued parameter.
SQL:
CREATE TYPE [dbo].[udtKeys] AS TABLE([i] [int] NOT NULL)
GO
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
CREATE PROCEDURE [dbo].[myProc](@data as dbo.udtKeys readonly) AS
BEGIN
    select i from @data;
END
GO
This will create a user-defined table type called udtKeys which contains just one int column named i, and a stored procedure that expects a parameter of that type. The proc does nothing but select the IDs you passed, but you can of course join other tables to it. For a hint regarding the syntax, see here.
C#:
var dataTable = new DataTable();
dataTable.Columns.Add("i", typeof(int));
foreach (var animalId in animalIds)
    dataTable.Rows.Add(animalId);
using (SqlConnection conn = new SqlConnection("connectionString goes here"))
{
    var r = conn.Query("myProc", new { data = dataTable }, commandType: CommandType.StoredProcedure);
    // r contains your results
}
The parameter within the procedure gets populated by passing a DataTable, and that DataTable's structure must match that of the table type you created.
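If your Dapper version needs the server-side type name spelled out, you can tag the DataTable explicitly; a sketch using Dapper's SetTypeName/AsTableValuedParameter extensions:
// Either tag the table once with the user-defined type name...
dataTable.SetTypeName("dbo.udtKeys");
// ...or wrap it at the call site:
var r = conn.Query("myProc",
    new { data = dataTable.AsTableValuedParameter("dbo.udtKeys") },
    commandType: CommandType.StoredProcedure);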
If you really need to pass more than 2100 values, you may want to consider indexing your table type to increase performance. You can actually give it a primary key if you don't pass any duplicate keys, like this:
CREATE TYPE [dbo].[udtKeys] AS TABLE(
    [i] [int] NOT NULL,
    PRIMARY KEY CLUSTERED
    (
        [i] ASC
    ) WITH (IGNORE_DUP_KEY = OFF)
)
GO
You may also need to assign execute permissions for the type to the database user you execute this with, like so:
GRANT EXEC ON TYPE::[dbo].[udtKeys] TO [User]
GO
See also here and here.
For me, the best way I was able to come up with was turning the list into a comma-separated string in C#, then using string_split in SQL to insert the data into a temp table. There are probably upper limits to this, but in my case I was only dealing with 6,000 records and it worked really fast.
public IList<int> LoadAnimalTypeIdsFromAnimalIds(IList<int> animalIds)
{
    using (var db = new SqlConnection(this.connectionString))
    {
        return db.Query<int>(
            @"--Created a temp table to join to later. An index on this would probably be good too.
            CREATE TABLE #tempAnimals (Id INT)
            INSERT INTO #tempAnimals (ID)
            SELECT value FROM string_split(@animalIdStrings, ',')

            SELECT at.animalTypeID
            FROM dbo.animalTypes [at]
            JOIN animals [a] ON a.animalTypeId = at.animalTypeId
            JOIN #tempAnimals temp ON temp.ID = a.animalID -- <-- added this
            JOIN edibleAnimals e ON e.animalID = a.animalID",
            new { animalIdStrings = string.Join(",", animalIds) }).ToList();
    }
}
It might be worth noting that string_split is only available in SQL Server 2016 or higher, or, if using Azure SQL, compatibility level 130 or higher. https://learn.microsoft.com/en-us/sql/t-sql/functions/string-split-transact-sql?view=sql-server-ver15

How to achieve partial update in Entity Framework 5/6?

I am working on Entity Framework with the database-first approach and I came across the below issue.
I have a Customer table with columns col1, col2, col3, ..., col8. I have created an entity for this table, and the table already has around 100 records. Out of the above 8 columns, col4 is marked as non-null.
class Customer
{
    member col1;
    member col2;
    member col3;
    member col4;
    ...
    member col8;
}
class Main
{
    // main logic to read data from database using EF
    Customer obj = /* object of Customer with values assigned to col1, col2 and col3 */;
    obj.col2 = some changed value;
    DBContext.SaveChanges(); // <- throws an error stating it is expecting a value for col4
}
In my application, I am trying to read one of the records using a stored procedure via EF, and the stored procedure only returns col1, col2 and col3.
I am trying to save the modified value of col2 back to the database using DBContext. But it throws an error stating the value of the required field col4 is not provided.
FYI: I have gone through a couple of forums, and the option to go with disabled validation on SaveChanges is not feasible for me.
Is there any other way through which I can achieve a partial update?
I guess EntityFramework.Utilities satisfies your conditions.
This code:
using (var db = new YourDbContext())
{
    db.AttachAndModify(new BlogPost { ID = postId }).Set(x => x.Reads, 10);
    db.SaveChanges();
}
will generate a single SQL command:
exec sp_executesql N'UPDATE [dbo].[BlogPosts]
SET [Reads] = @0
WHERE ([ID] = @1)
',N'@0 int,@1 int',@0=10,@1=1
disabled validation on SaveChanges is not feasible for me
Sure it is. You even have to disable validation on save. But then you can't mark the whole entity as modified, which I think you did. You must mark individual properties as modified:
var mySmallCustomer = someService.GetCustomer(); // from sproc
mySmallCustomer.col2 = "updated";
var myLargeCustomer = new Customer();
context.Customers.Attach(myLargeCustomer);
context.Entry(myLargeCustomer).CurrentValues.SetValues(mySmallCustomer);
// Here it comes:
context.Entry(myLargeCustomer).Property(c => c.col2).IsModified = true;
context.Configuration.ValidateOnSaveEnabled = false;
context.SaveChanges();
So you see it's enough to get the "small" customer. From this object you create a stub entity (myLargeCustomer) that is used for updating the one property.

How to retrieve inserted identity in migration of FluentMigrator

During a migration, how do I insert into my table, then retrieve the ID, and then use it to insert related data into another table?
What I have now is a hardcoded ID to insert, but I don't know what it's going to be when I run the migration.
var contactId = 2;
var phoneNumberId = 2;
Insert.IntoTable("Contacts")
    .WithIdentityInsert()
    .Row(new
    {
        Id = contactId,
        TimeZoneId = contact.TimeZoneId,
        contact.CultureId,
        Type = (byte)(int)contact.Type,
        Email = contact.Email.ToString(),
        EntityId = entityId
    });
Insert.IntoTable("PhoneNumbers")
    .WithIdentityInsert()
    .Row(new
    {
        Id = phoneNumberId,
        phone.Number,
        Type = (byte)(int)phone.Type,
        ContactId = contactId
    });
I'd like to be able to retrieve the inserted ID and use it for the second insert instead of hardcoding it.
I'm using SQL Server, if that's any help...
I thought this would be trivial, but it seems it's not, after googling for it and not seeing any answers here.
You are able to manually insert the Id by chaining .WithInsertIdentity() after your .Row() call.
This will let you keep it in memory for use within other objects as foreign keys. Unfortunately, FluentMigrator doesn't actually execute any SQL until after all code within the Up() or Down() methods finishes executing.
I use Execute.Sql() with @@IDENTITY in the SQL query for such cases.
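A minimal sketch of that idea with hypothetical column values (it assumes the Contacts/PhoneNumbers tables from the question; SCOPE_IDENTITY() is generally preferred over @@IDENTITY because it ignores identities generated by triggers):
// Hypothetical sketch: both inserts run in one batch inside the migration, so the
// generated identity can be captured and reused for the child row.
Execute.Sql(@"
    INSERT INTO Contacts (TimeZoneId, CultureId, Type, Email, EntityId)
    VALUES ('UTC', 1, 0, 'someone@example.com', 1);

    DECLARE @contactId int = SCOPE_IDENTITY();

    INSERT INTO PhoneNumbers (Number, Type, ContactId)
    VALUES ('555-0100', 0, @contactId);
");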
