Avoiding the 2100 parameter limit in LINQ to SQL - C#

In a project I am currently working on, I need to access 2 databases in LINQ in the following manner:
1. I get a list of all trip numbers between a specified date range from DB1, and store this as a list of 'long' values.
2. I perform an extensive query with a lot of joins on DB2, but only looking at trips whose trip number is included in the above list.
Problem is, the trip list from DB1 often returns over 2100 items, and I of course hit the 2100 parameter limit in SQL Server, which causes my second query to fail. I've been looking at ways around this, such as described here, but that approach essentially changes my query to LINQ-to-Objects, which causes a lot of issues with my joins.
Are there any other workarounds I can do?

As LINQ-to-SQL can call stored procedures, you could:
have a stored procedure that takes an array as an input, then puts the values in a temp table to join on
likewise, pass a delimited string that the stored procedure splits (a sketch of this variant follows)
Or upload all the values to a temp table yourself and join on that table.
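For illustration, a minimal sketch of the delimited-string variant, assuming SQL Server 2016+ for STRING_SPLIT; the procedure name GetTripsByIds, the Trips table, and the TripNumber column are all hypothetical (namespaces: System.Data, System.Data.SqlClient):

// The stored procedure you would deploy once, shown here as a comment:
// CREATE PROCEDURE dbo.GetTripsByIds (@tripIds VARCHAR(MAX)) AS
// BEGIN
//     SELECT t.*
//     FROM dbo.Trips t
//     JOIN STRING_SPLIT(@tripIds, ',') s ON t.TripNumber = CAST(s.value AS BIGINT);
// END
using (var conn = new SqlConnection(connectionString))
using (var cmd = new SqlCommand("dbo.GetTripsByIds", conn))
{
    cmd.CommandType = CommandType.StoredProcedure;
    // A single parameter carries all the trip numbers, so the 2100-parameter
    // limit never applies, no matter how long the list is.
    cmd.Parameters.AddWithValue("@tripIds", string.Join(",", tripNumbers));
    conn.Open();
    using (var reader = cmd.ExecuteReader())
    {
        while (reader.Read())
        {
            // map columns to your result type as needed
        }
    }
}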
However, maybe you should rethink the problem:
SQL Server can be configured to allow queries against tables in other databases (including Oracle); if you are allowed to do this, it may be an option for you (see the sketch below).
Could you use some replication system to keep a table of trip numbers updated in DB2?
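If both databases live on the same SQL Server instance, a cross-database join via three-part names is the simplest form of this. A sketch, in which the table and column names are assumptions:

// Both tables are referenced by three-part names (Database.Schema.Table), so
// the filtering happens entirely on the server and no parameter list is built.
const string sql = @"
    SELECT t2.*
    FROM DB2.dbo.Trips t2
    JOIN DB1.dbo.TripLog t1 ON t1.TripNumber = t2.TripNumber
    WHERE t1.TripDate BETWEEN @from AND @to;";
using (var conn = new SqlConnection(connectionString))
using (var cmd = new SqlCommand(sql, conn))
{
    cmd.Parameters.AddWithValue("@from", fromDate);
    cmd.Parameters.AddWithValue("@to", toDate);
    conn.Open();
    // execute and read as usual
}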

Not sure whether this will help, but I had a similar issue for a one-off query I was writing in LinqPad and ended up defining and using a temporary table like this.
[Table(Name="#TmpTable1")]
public class TmpRecord
{
[Column(DbType="Int", IsPrimaryKey=true, UpdateCheck=UpdateCheck.Never)]
public int? Value { get; set; }
}
public Table<TmpRecord> TmpRecords
{
get { return base.GetTable<TmpRecord>(); }
}
public void DropTable<T>()
{
ExecuteCommand( "DROP TABLE " + Mapping.GetTable(typeof(T)).TableName );
}
public void CreateTable<T>()
{
// Uses LINQ to SQL's internal SqlBuilder.GetCreateTableCommand helper, via
// reflection, to generate a CREATE TABLE statement from the mapping of T.
ExecuteCommand(
typeof(DataContext)
.Assembly
.GetType("System.Data.Linq.SqlClient.SqlBuilder")
.InvokeMember("GetCreateTableCommand",
BindingFlags.Static | BindingFlags.NonPublic | BindingFlags.InvokeMethod,
null, null, new[] { Mapping.GetTable(typeof(T)) }) as string);
}
Usage is something like
void Main()
{
List<int> ids = ....
this.Connection.Open();
// Note, if the connection is not opened here, the temporary table
// will be created but then dropped immediately.
CreateTable<TmpRecord>();
foreach(var id in ids)
TmpRecords.InsertOnSubmit(new TmpRecord() { Value = id });
SubmitChanges();
var list1 = (from r in CustomerTransaction
join tt in TmpRecords on r.CustomerID equals tt.Value
where ....
select r).ToList();
DropTable<TmpRecord>();
this.Connection.Close();
}
In my case the temporary table only had one int column, but you should be able to define whatever columns you want (as long as you have a primary key).

You may split your query into batches, or use a temporary table in database2 filled with the results from database1. A sketch of the batching approach follows.
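A minimal sketch of the batching variant, assuming the DB2 query can be run once per chunk and the results combined in memory; Chunk requires .NET 6+ (use Skip/Take on older frameworks), and db.Trips and the Trip type are stand-ins for your own data context and result type:

// Run the expensive DB2 query once per batch of at most 2000 trip numbers,
// staying safely under SQL Server's 2100-parameter limit per command.
var results = new List<Trip>();
foreach (var batch in tripNumbers.Chunk(2000))
{
    var ids = batch.ToList();
    // Contains() on a local list translates to an IN (...) clause in SQL.
    results.AddRange(db.Trips.Where(t => ids.Contains(t.TripNumber)).ToList());
}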


Syntax for calling a stored procedure in .NET Core without parameters

I have created a stored procedure that returns a recordset model which joins several different tables in a single database. I have scoured the internet for information regarding "proper syntax for calling a stored procedure from mvc 6 C# without parameters" and have learned several things.
Firstly, there used to be what looked like understandable answers, namely "ExecuteSqlCommand" and "ExecuteSqlCommandAsync", which evidently are no longer used. Their replacements are explained here: https://learn.microsoft.com/en-us/ef/core/what-is-new/ef-core-3.x/breaking-changes#fromsql. They seem to be limited to "FromSql/FromSqlRaw" (which returns a recordset model) and "ExecuteSqlRaw/ExecuteSqlRawAsync()", which returns an integer with a specified meaning.
The second thing is that everywhere "before and after" examples are given, the examples without parameters are skipped (as in all of the MS docs).
And thirdly, all of the examples that return a recordset model with data seem tied to a table, such as:
var students = context.Students.FromSql("GetStudents 'Bill'").ToList();
(for example here: https://www.entityframeworktutorial.net/efcore/working-with-stored-procedure-in-ef-core.aspx). And, as stored procedures are stored in their own directories and can reference any tables, multiple tables, or even no tables, I don't understand this relationship requirement in calling them. Or maybe they are tied to models (since in this entity framework, everything seems to have the exact same name)... But what if your stored procedure isn't returning a recordset tied to a model? Do you have to create a new model just for the output of this stored procedure? I tried that, and it didn't seem to help.
So, my fundamental question is, how do I call a stored procedure without any parameters that returns a recordset model with data?
return await _context.(what goes here?).ExecuteSqlRaw("EXEC MyStoredProcedure").ToListAsync();
return await _context.ReturnModel.ExecuteSqlRaw("EXEC? MyStoredProcedure").ToListAsync();
Updated Code:
Added Model
public class InquiryQuote
{
public Inquiry inquiry { get; set; }
public int QuoteID { get; set; } = 0;
}
Added DBSet:
public virtual DbSet<InquiryQuote> InquiryQuotes { get; set; } = null!;
And updated the calling controller:
// GET: api/Inquiries
[HttpGet]
public async Task<ActionResult<IEnumerable<InquiryQuote>>> GetInquiries()
{
//return await _context.Inquiries.ToListAsync();
//return await _context.Inquiries.Where(i => i.YNDeleted == false).ToListAsync();
// var IQ = await _context.InquiryQuotes.FromSqlRaw("GetInquiryList").ToListAsync();
var IQ = await _context.InquiryQuotes.FromSqlRaw("EXEC GetInquiryList").ToListAsync();
return Ok(IQ);
}
Both versions of "IQ" return the same results:
System.Data.SqlTypes.SqlNullValueException: Data is Null. This method or property cannot be called on Null values.
at Microsoft.Data.SqlClient.SqlBuffer.ThrowIfNull()
at Microsoft.Data.SqlClient.SqlBuffer.get_Int32()
at Microsoft.Data.SqlClient.SqlDataReader.GetInt32(Int32 i)
at lambda_method17(Closure , QueryContext , DbDataReader , Int32[] )
...
And here is the image of the stored procedure run directly from my development site: https://i.stack.imgur.com/RJhMr.png
UPDATE (And partial answer to the question in the comments):
I am using the Entity Framework, and will be performing data manipulation prior to returning the newly created InquiryQuotes model from the stored procedure to be used in several views.
Why am I getting a SQL error thrown in Postman (System.Data.SqlTypes.SqlNullValueException: Data is Null. This method or property cannot be called on Null values.) when calling the stored procedure, while running it directly from Visual Studio returns a "dataset" as shown in my image? Does it have something to do with additional values being returned from the stored procedure that are not being accounted for, like "DECLARE @return_value Int / SELECT @return_value as 'Return Value'", or is this just a feature of executing it from VS? Since it has no input params, where is the NULL coming from?
I seem to have found the answer (but still don't know the why...)
I started breaking it down bit by bit. The procedure ran on SQL, and ran remotely in Visual Studio when directly accessing SQL, but not when called from the controller. So I replaced the complex stored procedure with a simple one that returned all fields from the Inquiries table where the id matched an input variable (because I had LOTS of examples for that).
Stored Procedure:
CREATE PROCEDURE [dbo].[GetInquiry]
@InquiryID int = 0
AS
BEGIN
SET NOCOUNT ON
select i.*
FROM dbo.Inquiries i
WHERE i.YNDeleted = 0 AND i.InquiryId = @InquiryID
END
And the controller method (with the InquiryQuote model modified to eliminate the "quote" requirement):
public async Task<ActionResult<IEnumerable<InquiryQuote>>> GetInquiries()
{
//return await _context.Inquiries.ToListAsync();
//return await _context.Inquiries.Where(i => i.YNDeleted == false).ToListAsync();
SqlParameter ID = new SqlParameter();
ID.Value = 0;
var IQ = _context.InquiryQuotes.FromSqlRaw("GetInquiry {0}", ID).ToList();
//var IQ = await _context.InquiryQuotes.FromSqlRaw("dbo.GetInquiryList").ToListAsync();
return IQ;
}
And (after a bit of tweaking) it returned a JSON result of the inquiry data for the ID in Postman.
{
"inquiryId": 9,
(snip)
"ynDeleted": false
}
So, once I had something that at least worked, I added just the quote back in to this simple model and ran it again
select i.*, 0 AS Quoteid
FROM dbo.Inquiries i
LEFT JOIN dbo.Quotes q ON i.InquiryId = q.InquiryId
WHERE i.YNDeleted = 0 AND i.InquiryId = @InquiryID
(I set the QuoteID to 0, because I had no data in the Quotes table yet).
AND the Model:
[Keyless]
public class InquiryQuote
{
public Inquiry inquiry { get; set; }
public int QuoteID { get; set; } = 0;
}
And ran it again, and the results were astonishing:
{
inquiry:{null},
QuoteID:0
}
I still don't understand why, but evidently my LEFT JOIN of the Inquiries table with an empty Quotes table returned null results here, even though running the same join in SQL returned results. Somewhere between SQL and the API, the data was being nullified...
To test this theory, I updated my InquiryQuote model to put the "inquiry" data and "quoteid" at the same level, to wit:
public class InquiryQuote
{
public int InquiryId { get; set; } = 0;
(snip)
public Boolean YNDeleted { get; set; } = false;
public int QuoteID { get; set; } = 0;
}
and the entire result set was null...
So at that point, I figured it must have something to do with that LEFT JOIN with a table with no records. So I added a blank (default) entry into that table and, voila, the data I was expecting:
{
"inquiryId": 9,
(snip)
"ynDeleted": false,
"quoteID": 0
}
So, now I have a working way to call a stored procedure with one parameter!!
I then updated the stored procedure to deal with nulls from the database as so:
select i.*, ISNULL(q.QuoteId,0) AS Quoteid
FROM dbo.Inquiries i
LEFT JOIN dbo.Quotes q ON i.InquiryId = q.InquiryId
WHERE i.YNDeleted = 0 AND i.InquiryId = @InquiryID
And now am returning correct data.
I still don't know why the stored procedure runs in SQL and returns data, but throws a SQL error when run from the controller. That will require a deeper dive into the interconnectivity between SQL and the API and how errors are passed between the two. And I am pretty certain I will be able to figure out how to convert this call into one that uses no parameters.
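For what it's worth, a minimal sketch of the parameterless version under the assumptions above (EF Core 3+; the result model is registered as keyless, since it doesn't correspond to a real table):

// In the DbContext: FromSqlRaw result models that aren't real tables need
// either a [Keyless] attribute or an explicit HasNoKey() mapping.
protected override void OnModelCreating(ModelBuilder modelBuilder)
{
    modelBuilder.Entity<InquiryQuote>().HasNoKey();
}

// In the controller: no parameters, just the procedure name.
[HttpGet]
public async Task<ActionResult<IEnumerable<InquiryQuote>>> GetInquiries()
{
    var list = await _context.InquiryQuotes
        .FromSqlRaw("EXEC dbo.GetInquiryList")
        .ToListAsync();
    return Ok(list);
}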
Thank you everyone for your help.

C# Datatable String or binary data truncated [duplicate]

I have C# code which runs a lot of insert statements in a batch. While executing these statements, I got a "String or binary data would be truncated" error and the transaction rolled back.
To find out which insert statement caused this, I would need to run the inserts one by one in SQL Server until I hit the error.
Is there a clever way to find out which statement and which field caused this issue using exception handling? (SqlException)
In general, there isn't a way to determine which particular statement caused the error. If you're running several, you could watch Profiler, look at the last completed statement, and see what the statement after that might be, though I have no idea whether that approach is feasible for you.
In any event, one of your parameter variables (and the data inside it) is too large for the field it's trying to store data in. Check your parameter sizes against column sizes and the field(s) in question should be evident pretty quickly.
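What exception handling can reliably give you is confirmation that it is this specific error: SQL Server raises truncation as error number 8152 (or 2628 on newer versions with trace flag 460 enabled, where the message itself names the table, column, and offending value), and that number is exposed on SqlException.Number. A sketch:

try
{
    cmd.ExecuteNonQuery(); // or SubmitChanges()/SaveChanges() when using an ORM
}
catch (SqlException ex) when (ex.Number == 8152 || ex.Number == 2628)
{
    // 8152: "String or binary data would be truncated."
    // 2628: the newer variant that includes table, column and value in its text.
    Console.WriteLine(ex.Message);
    throw;
}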
This type of error occurs when the length of the data you enter exceeds the length declared for the SQL Server column. For example, if you specify
transaction_status varchar(10)
but you actually try to store
_transaction_status
which contains 19 characters, you will get this type of error.
Generally it means you are inserting a value larger than the maximum allowed. For example, the column can only hold up to 200 characters, but you are inserting a 201-character string.
BEGIN TRY
INSERT INTO YourTable (col1, col2) VALUES (@val1, @val2)
END TRY
BEGIN CATCH
--print or insert into error log or return param or etc...
PRINT '@val1='+ISNULL(CONVERT(varchar,@val1),'')
PRINT '@val2='+ISNULL(CONVERT(varchar,@val2),'')
END CATCH
For SQL 2016 SP2 or higher follow this link
For older versions of SQL do this:
Get the query that is causing the problems (you can also use SQL Profiler if you don't have the source)
Remove all WHERE clauses and other unimportant parts until you are basically just left with the SELECT and FROM parts
Add WHERE 0 = 1 (this will select only table structure)
Add INTO [MyTempTable] just before the FROM clause
You should end up with something like
SELECT
Col1, Col2, ..., [ColN]
INTO [MyTempTable]
FROM
[Tables etc.]
WHERE 0 = 1
This will create a table called MyTempTable in your DB that you can compare to your target table structure i.e. you can compare the columns on both tables to see where they differ. It is a bit of a workaround but it is the quickest method I have found.
It depends on how you are making the insert calls: all as one call, or as individual calls within a transaction? If individual calls, then yes (as you iterate through the calls, catch the one that fails). If one large call, then no: SQL processes the whole statement, so it's out of the hands of the code.
I have created a simple way of finding offending fields by:
Getting the column width of all the columns of a table where we're trying to make this insert/ update. (I'm getting this info directly from the database.)
Comparing the column widths to the width of the values we're trying to insert/ update.
Assumptions/ Limitations:
The column names of the table in the database match the C# entity fields. E.g., if you have a column named SourceData in the database, you need to have your entity property with the same name:
public class SomeTable
{
// Other fields
public string SourceData { get; set; }
}
You're inserting/ updating 1 entity at a time. It'll be clearer in the demo code below. (If you're doing bulk inserts/ updates, you might want to either modify it or use some other solution.)
Step 1:
Get the column width of all the columns directly from the database:
// For this, I took help from Microsoft docs website:
// https://learn.microsoft.com/en-us/dotnet/api/system.data.sqlclient.sqlconnection.getschema?view=netframework-4.7.2#System_Data_SqlClient_SqlConnection_GetSchema_System_String_System_String___
private static Dictionary<string, int> GetColumnSizesOfTableFromDatabase(string tableName, string connectionString)
{
var columnSizes = new Dictionary<string, int>();
using (var connection = new SqlConnection(connectionString))
{
// Connect to the database then retrieve the schema information.
connection.Open();
// You can specify the Catalog, Schema, Table Name, Column Name to get the specified column(s).
// You can use four restrictions for Column, so you should create a 4 members array.
String[] columnRestrictions = new String[4];
// For the array, 0-member represents Catalog; 1-member represents Schema;
// 2-member represents Table Name; 3-member represents Column Name.
// Now we specify the Table_Name and Column_Name of the columns what we want to get schema information.
columnRestrictions[2] = tableName;
DataTable allColumnsSchemaTable = connection.GetSchema("Columns", columnRestrictions);
foreach (DataRow row in allColumnsSchemaTable.Rows)
{
var columnName = row.Field<string>("COLUMN_NAME");
//var dataType = row.Field<string>("DATA_TYPE");
var characterMaxLength = row.Field<int?>("CHARACTER_MAXIMUM_LENGTH");
// I'm only capturing columns whose Datatype is "varchar" or "char", i.e. their CHARACTER_MAXIMUM_LENGTH won't be null.
if(characterMaxLength != null)
{
columnSizes.Add(columnName, characterMaxLength.Value);
}
}
connection.Close();
}
return columnSizes;
}
Step 2:
Compare the column widths with the width of the values we're trying to insert/ update:
public static Dictionary<string, string> FindLongBinaryOrStringFields<T>(T entity, string connectionString)
{
var tableName = typeof(T).Name;
Dictionary<string, string> longFields = new Dictionary<string, string>();
var objectProperties = GetProperties(entity);
//var fieldNames = objectProperties.Select(p => p.Name).ToList();
var actualDatabaseColumnSizes = GetColumnSizesOfTableFromDatabase(tableName, connectionString);
foreach (var dbColumn in actualDatabaseColumnSizes)
{
var maxLengthOfThisColumn = dbColumn.Value;
var currentValueOfThisField = objectProperties.FirstOrDefault(f => f.Name == dbColumn.Key)?.GetValue(entity, null)?.ToString();
if (!string.IsNullOrEmpty(currentValueOfThisField) && currentValueOfThisField.Length > maxLengthOfThisColumn)
{
longFields.Add(dbColumn.Key, $"'{dbColumn.Key}' column cannot take the value of '{currentValueOfThisField}' because the max length it can take is {maxLengthOfThisColumn}.");
}
}
return longFields;
}
public static List<PropertyInfo> GetProperties<T>(T entity)
{
//The DeclaredOnly flag makes sure you only get properties of the object, not from the classes it derives from.
var properties = entity.GetType()
.GetProperties(System.Reflection.BindingFlags.Public
| System.Reflection.BindingFlags.Instance
| System.Reflection.BindingFlags.DeclaredOnly)
.ToList();
return properties;
}
Demo:
Let's say we're trying to insert someTableEntity of SomeTable class that is modeled in our app like so:
public class SomeTable
{
[Key]
public long TicketID { get; set; }
public string SourceData { get; set; }
}
And it's inside our SomeDbContext like so:
public class SomeDbContext : DbContext
{
public DbSet<SomeTable> SomeTables { get; set; }
}
This table in the Db has the SourceData field as varchar(16).
Now we'll try to insert value that is longer than 16 characters into this field and capture this information:
public void SaveSomeTableEntity()
{
var connectionString = "server=SERVER_NAME;database=DB_NAME;User ID=SOME_ID;Password=SOME_PASSWORD;Connection Timeout=200";
using (var context = new SomeDbContext(connectionString))
{
var someTableEntity = new SomeTable()
{
SourceData = "Blah-Blah-Blah-Blah-Blah-Blah"
};
context.SomeTables.Add(someTableEntity);
try
{
context.SaveChanges();
}
catch (Exception ex)
{
if (ex.GetBaseException().Message == "String or binary data would be truncated.\r\nThe statement has been terminated.")
{
var badFieldsReport = "";
List<string> badFields = new List<string>();
// YOU GOT YOUR FIELDS RIGHT HERE:
var longFields = FindLongBinaryOrStringFields(someTableEntity, connectionString);
foreach (var longField in longFields)
{
badFields.Add(longField.Key);
badFieldsReport += longField.Value + "\n";
}
}
else
throw;
}
}
}
The badFieldsReport will have this value:
'SourceData' column cannot take the value of
'Blah-Blah-Blah-Blah-Blah-Blah' because the max length it can take is
16.
It could also be because you're trying to put a null value back into the database. So one of your transactions could have nulls in them.
Most of the answers here are to do the obvious check, that the length of the column as defined in the database isn't smaller than the data you are trying to pass into it.
Several times I have been bitten by going to SQL Management Studio, doing a quick
sp_help 'mytable'
and being confused for a few minutes until I realized the column in question is an nvarchar, which means the length reported by sp_help is really double the real supported length, because it's a double-byte (Unicode) datatype.
i.e. if sp_help reports nvarchar Length 40, you can store 20 characters max.
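If you'd rather skip the mental division, INFORMATION_SCHEMA reports lengths in characters rather than bytes. A quick sketch; the table name and connection string are assumptions:

// CHARACTER_MAXIMUM_LENGTH is in characters, so nvarchar(20) reports 20 here,
// unlike sp_help, which reports the byte length (40).
using (var conn = new SqlConnection(connectionString))
using (var cmd = new SqlCommand(
    @"SELECT COLUMN_NAME, DATA_TYPE, CHARACTER_MAXIMUM_LENGTH
      FROM INFORMATION_SCHEMA.COLUMNS
      WHERE TABLE_NAME = @table", conn))
{
    cmd.Parameters.AddWithValue("@table", "mytable");
    conn.Open();
    using (var reader = cmd.ExecuteReader())
        while (reader.Read())
            Console.WriteLine($"{reader[0]} {reader[1]} ({reader[2]})");
}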
Check out this gist:
https://gist.github.com/mrameezraja/9f15ad624e2cba8ac24066cdf271453b.
public Dictionary<string, string> GetEvilFields(string tableName, object instance)
{
Dictionary<string, string> result = new Dictionary<string, string>();
var tableType = this.Model.GetEntityTypes().FirstOrDefault(c => c.GetTableName().Contains(tableName));
if (tableType != null)
{
int i = 0;
foreach (var property in tableType.GetProperties())
{
var maxlength = property.GetMaxLength();
var prop = instance.GetType().GetProperties().FirstOrDefault(_ => _.Name == property.Name);
if (prop != null)
{
var length = prop.GetValue(instance)?.ToString()?.Length;
if (length > maxlength)
{
result.Add($"{i}.Evil.Property", prop.Name);
result.Add($"{i}.Evil.Value", prop.GetValue(instance)?.ToString());
result.Add($"{i}.Evil.Value.Length", length?.ToString());
result.Add($"{i}.Evil.Db.MaxLength", maxlength?.ToString());
i++;
}
}
}
}
return result;
}
With LINQ to SQL I debugged by logging the context, e.g. Context.Log = Console.Out, then scanned the SQL to check for any obvious errors; there were two:
-- @p46: Input Char (Size = -1; Prec = 0; Scale = 0) [some long text value1]
-- @p8: Input Char (Size = -1; Prec = 0; Scale = 0) [some long text value2]
the last one I found by scanning the table schema against the values; the field was nvarchar(20) but the value was 22 chars
-- @p41: Input NVarChar (Size = 4000; Prec = 0; Scale = 0) [1234567890123456789012]
In our own case, I increased the allowable field size in the SQL table, which was smaller than the total characters posted from the front end. That resolved the issue.
Simply use this in C#.NET:
MessageBox.Show(cmd4.CommandText.ToString());
This will show you the main query; copy it and run it in the database.

How can I use more than 2100 values in an IN clause using Dapper?

I have a List containing ids that I want to insert into a temp table using Dapper in order to avoid the SQL limit on parameters in the 'IN' clause.
So currently my code looks like this:
public IList<int> LoadAnimalTypeIdsFromAnimalIds(IList<int> animalIds)
{
using (var db = new SqlConnection(this.connectionString))
{
return db.Query<int>(
@"SELECT a.animalID
FROM
dbo.animalTypes [at]
INNER JOIN animals [a] on a.animalTypeId = at.animalTypeId
INNER JOIN edibleAnimals e on e.animalID = a.animalID
WHERE
at.animalId in @animalIds", new { animalIds }).ToList();
}
}
The problem I need to solve is that when there are more than 2100 ids in the animalIds list then I get a SQL error "The incoming request has too many parameters. The server supports a maximum of 2100 parameters".
So now I would like to create a temp table populated with the animalIds passed into the method. Then I can join the animals table on the temp table and avoid having a huge "IN" clause.
I have tried various combinations of syntax but not got anywhere.
This is where I am now:
public IList<int> LoadAnimalTypeIdsFromAnimalIds(IList<int> animalIds)
{
using (var db = new SqlConnection(this.connectionString))
{
db.Execute(@"SELECT INTO #tempAnimalIds @animalIds");
return db.Query<int>(
@"SELECT a.animalID
FROM
dbo.animalTypes [at]
INNER JOIN animals [a] on a.animalTypeId = at.animalTypeId
INNER JOIN edibleAnimals e on e.animalID = a.animalID
INNER JOIN #tempAnimalIds tmp on tmp.animalID = a.animalID").ToList();
}
}
I can't get the SELECT INTO working with the list of IDs. Am I going about this the wrong way? Maybe there is a better way to avoid the IN clause limit.
I do have a backup solution in that I can split the incoming list of animalIDs into blocks of 1000, but I've read that the large IN clause suffers a performance hit, and joining a temp table should be more efficient. It also means I don't need extra 'splitting' code to batch up the ids into blocks of 1000.
Ok, here's the version you want. I'm adding this as a separate answer, as my first answer using SP/TVP utilizes a different concept.
public IList<int> LoadAnimalTypeIdsFromAnimalIds(IList<int> animalIds)
{
using (var db = new SqlConnection(this.connectionString))
{
// This Open() call is vital! If you don't open the connection, Dapper will
// open/close it automagically, which means that you'll lose the created
// temp table directly after the statement completes.
db.Open();
// This temp table is created having a primary key. So make sure you don't pass
// any duplicate IDs
db.Execute("CREATE TABLE #tempAnimalIds(animalId int not null primary key);");
while (animalIds.Any())
{
// Build the statements to insert the Ids. For this, we need to split animalIDs
// into chunks of 1000, as this flavour of INSERT INTO is limited to 1000 values
// at a time.
var ids2Insert = animalIds.Take(1000);
animalIds = animalIds.Skip(1000).ToList();
StringBuilder stmt = new StringBuilder("INSERT INTO #tempAnimalIds VALUES (");
stmt.Append(string.Join("),(", ids2Insert));
stmt.Append(");");
db.Execute(stmt.ToString());
}
return db.Query<int>(@"SELECT animalID FROM #tempAnimalIds").ToList();
}
}
To test:
var ids = LoadAnimalTypeIdsFromAnimalIds(Enumerable.Range(1, 2500).ToList());
You just need to amend your select statement to what it originally was. As I don't have all your tables in my environment, I just selected from the created temp table to prove it works the way it should.
Pitfalls, see comments:
Open the connection at the beginning; otherwise the temp table will be gone after Dapper automatically closes the connection right after creating the table.
This particular flavour of INSERT INTO is limited to 1000 values at a time, so the passed IDs need to be split into chunks accordingly.
Don't pass duplicate keys, as the primary key on the temp table will not allow that.
Edit
It seems Dapper supports a set-based operation which will make this work too:
public IList<int> LoadAnimalTypeIdsFromAnimalIdsV2(IList<int> animalIds)
{
// This creates an IEnumerable of an anonymous type containing an Id property. This seems
// to be necessary to be able to grab the Id by its name via Dapper.
var namedIDs = animalIds.Select(i => new {Id = i});
using (var db = new SqlConnection(this.connectionString))
{
// This is vital! If you don't open the connection, Dapper will open/close it
// automagically, which means that you'll lose the created temp table directly
// after the statement completes.
db.Open();
// This temp table is created having a primary key. So make sure you don't pass
// any duplicate IDs
db.Execute("CREATE TABLE #tempAnimalIds(animalId int not null primary key);");
// Using one of Dapper's convenient features, the INSERT becomes:
db.Execute("INSERT INTO #tempAnimalIds VALUES(@Id);", namedIDs);
return db.Query<int>(@"SELECT animalID FROM #tempAnimalIds").ToList();
}
}
I don't know how well this will perform compared to the previous version (i.e. 2500 single inserts instead of three inserts with 1000, 1000, and 500 values each). But the docs suggest that it performs better when used together with async, MARS and pipelining.
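For reference, a minimal sketch of that async variant, assuming the same open connection and temp table as above:

// Dapper unrolls the anonymous-type sequence into one INSERT per element; the
// Dapper docs suggest those round-trips can overlap when pipelining and MARS
// are enabled (MultipleActiveResultSets=True in the connection string).
await db.ExecuteAsync("INSERT INTO #tempAnimalIds VALUES(@Id);", namedIDs);
var result = (await db.QueryAsync<int>("SELECT animalID FROM #tempAnimalIds")).ToList();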
In your example, what I can't see is how your list of animalIds is actually passed to the query to be inserted into the #tempAnimalIDs table.
There is a way to do it without using a temp table, utilizing a stored procedure with a table value parameter.
SQL:
CREATE TYPE [dbo].[udtKeys] AS TABLE([i] [int] NOT NULL)
GO
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
CREATE PROCEDURE [dbo].[myProc](@data as dbo.udtKeys readonly)AS
BEGIN
select i from @data;
END
GO
This will create a user-defined table type called udtKeys which contains just one int column named i, and a stored procedure that expects a parameter of that type. The proc does nothing but select the IDs you passed, but you can of course join other tables to it. For a hint regarding the syntax, see here.
C#:
var dataTable = new DataTable();
dataTable.Columns.Add("i", typeof(int));
foreach (var animalId in animalIds)
dataTable.Rows.Add(animalId);
using(SqlConnection conn = new SqlConnection("connectionString goes here"))
{
var r = conn.Query("myProc", new { data = dataTable }, commandType: CommandType.StoredProcedure);
// r contains your results
}
The parameter within the procedure gets populated by passing a DataTable, and that DataTable's structure must match the one of the table type you created. (Depending on your Dapper version, you may need to name the type explicitly, e.g. new { data = dataTable.AsTableValuedParameter("dbo.udtKeys") }.)
If you really need to pass more than 2100 values, you may want to consider indexing your table type to increase performance. You can actually give it a primary key if you don't pass any duplicate keys, like this:
CREATE TYPE [dbo].[udtKeys] AS TABLE(
[i] [int] NOT NULL,
PRIMARY KEY CLUSTERED
(
[i] ASC
)WITH (IGNORE_DUP_KEY = OFF)
)
GO
You may also need to assign execute permissions for the type to the database user you execute this with, like so:
GRANT EXEC ON TYPE::[dbo].[udtKeys] TO [User]
GO
See also here and here.
For me, the best way I was able to come up with was turning the list into a comma-separated string in C#, then using string_split in SQL to insert the data into a temp table. There are probably upper limits to this, but in my case I was only dealing with 6,000 records and it worked really fast.
public IList<int> LoadAnimalTypeIdsFromAnimalIds(IList<int> animalIds)
{
using (var db = new SqlConnection(this.connectionString))
{
return db.Query<int>(
#" --Created a temp table to join to later. An index on this would probably be good too.
CREATE TABLE #tempAnimals (Id INT)
INSERT INTO #tempAnimals (ID)
SELECT value FROM string_split(#animalIdStrings)
SELECT at.animalTypeID
FROM dbo.animalTypes [at]
JOIN animals [a] ON a.animalTypeId = at.animalTypeId
JOIN #tempAnimals temp ON temp.ID = a.animalID -- <-- added this
JOIN edibleAnimals e ON e.animalID = a.animalID",
new { animalIdStrings = string.Join(",", animalIds) }).ToList();
}
}
It might be worth noting that string_split is only available in SQL Server 2016 or higher, or, if using Azure SQL, compatibility level 130 or higher. https://learn.microsoft.com/en-us/sql/t-sql/functions/string-split-transact-sql?view=sql-server-ver15

String or binary data would be truncated exception when inserting data [duplicate]

I am running a data.bat file with the following lines:
Rem This batch file will populate tables
cd\program files\Microsoft SQL Server\MSSQL
osql -U sa -P Password -d MyBusiness -i c:\data.sql
The contents of the data.sql file is:
insert Customers
(CustomerID, CompanyName, Phone)
Values('101','Southwinds','19126602729')
There are 8 more similar lines for adding records.
When I run this with start > run > cmd > c:\data.bat, I get this error message:
1>2>3>4>5>....<1 row affected>
Msg 8152, Level 16, State 4, Server SP1001, Line 1
string or binary data would be truncated.
<1 row affected>
<1 row affected>
<1 row affected>
<1 row affected>
<1 row affected>
<1 row affected>
Also, I am a newbie obviously, but what do Level # and State # mean, and how do I look up error messages such as the one above (8152)?
From @gmmastros's answer:
Whenever you see the message....
string or binary data would be truncated
Think to yourself... The field is NOT big enough to hold my data.
Check the table structure for the customers table. I think you'll find that the length of one or more fields is NOT big enough to hold the data you are trying to insert. For example, if the Phone field is a varchar(8) field, and you try to put 11 characters in to it, you will get this error.
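As for the side question about looking up error numbers like 8152: Level is the severity of the error, and State distinguishes the different places in the engine that can raise the same error number. The message catalog itself is queryable from sys.messages; a quick sketch, with the connection string assumed:

using (var conn = new SqlConnection(connectionString))
using (var cmd = new SqlCommand(
    @"SELECT severity, text
      FROM sys.messages
      WHERE message_id = 8152 AND language_id = 1033", conn)) // 1033 = English
{
    conn.Open();
    using (var reader = cmd.ExecuteReader())
        while (reader.Read())
            Console.WriteLine($"Level {reader[0]}: {reader[1]}");
}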
I had this issue although data length was shorter than the field length.
It turned out that the problem was having another log table (for audit trail), filled by a trigger on the main table, where the column size also had to be changed.
In one of the INSERT statements you are attempting to insert a too long string into a string (varchar or nvarchar) column.
If it's not obvious which INSERT is the offender by a mere look at the script, you could count the <1 row affected> lines that occur before the error message. The obtained number plus one gives you the statement number. In your case it seems to be the second INSERT that produces the error.
Just want to contribute some additional information: I had the same issue, and it was because the field wasn't big enough for the incoming data; this thread helped me solve it (the top answer clarifies it all).
BUT it is very important to know the possible reasons that may cause it.
In my case, I was creating the table with a field like this:
Select '' as Period, * Into #NewTable From Transactions
Therefore the field "Period" had a length of zero, causing the insert operations to fail. I changed it to 'XXXXXX', the length of the incoming data, and it then worked properly (because the field now had a length of 6).
I hope this helps anyone with the same issue :)
Some of your data cannot fit into your database column because the column is too small. It is not easy to find what is wrong. If you use C# and LINQ to SQL, you can list the fields which would be truncated:
First create helper class:
public class SqlTruncationExceptionWithDetails : ArgumentOutOfRangeException
{
public SqlTruncationExceptionWithDetails(System.Data.SqlClient.SqlException inner, DataContext context)
: base(inner.Message + " " + GetSqlTruncationExceptionWithDetailsString(context))
{
}
/// <summary>
/// Part of code from the following link:
/// http://stackoverflow.com/questions/3666954/string-or-binary-data-would-be-truncated-linq-exception-cant-find-which-fiel
/// </summary>
/// <param name="context"></param>
/// <returns></returns>
static string GetSqlTruncationExceptionWithDetailsString(DataContext context)
{
StringBuilder sb = new StringBuilder();
foreach (object update in context.GetChangeSet().Updates)
{
FindLongStrings(update, sb);
}
foreach (object insert in context.GetChangeSet().Inserts)
{
FindLongStrings(insert, sb);
}
return sb.ToString();
}
public static void FindLongStrings(object testObject, StringBuilder sb)
{
foreach (var propInfo in testObject.GetType().GetProperties())
{
foreach (System.Data.Linq.Mapping.ColumnAttribute attribute in propInfo.GetCustomAttributes(typeof(System.Data.Linq.Mapping.ColumnAttribute), true))
{
if (attribute.DbType.ToLower().Contains("varchar"))
{
string dbType = attribute.DbType.ToLower();
int numberStartIndex = dbType.IndexOf("varchar(") + 8;
int numberEndIndex = dbType.IndexOf(")", numberStartIndex);
string lengthString = dbType.Substring(numberStartIndex, (numberEndIndex - numberStartIndex));
int maxLength = 0;
int.TryParse(lengthString, out maxLength);
string currentValue = (string)propInfo.GetValue(testObject, null);
if (!string.IsNullOrEmpty(currentValue) && maxLength != 0 && currentValue.Length > maxLength)
{
//string is too long
sb.AppendLine(testObject.GetType().Name + "." + propInfo.Name + " " + currentValue + " Max: " + maxLength);
}
}
}
}
}
}
Then prepare the wrapper for SubmitChanges:
public static class DataContextExtensions
{
public static void SubmitChangesWithDetailException(this DataContext dataContext)
{
//http://stackoverflow.com/questions/3666954/string-or-binary-data-would-be-truncated-linq-exception-cant-find-which-fiel
try
{
//this can fail on data truncation
dataContext.SubmitChanges();
}
catch (SqlException sqlException) //when (sqlException.Message == "String or binary data would be truncated.")
{
if (sqlException.Message == "String or binary data would be truncated.") //only for EN windows - if you are running different window language, invoke the sqlException.getMessage on thread with EN culture
throw new SqlTruncationExceptionWithDetails(sqlException, dataContext);
else
throw;
}
}
}
Prepare global exception handler and log truncation details:
protected void Application_Error(object sender, EventArgs e)
{
Exception ex = Server.GetLastError();
string message = ex.Message;
//TODO - log to file
}
Finally use the code:
Datamodel.SubmitChangesWithDetailException();
Another situation in which you can get this error is the following:
I had the same error, and the reason was that in an INSERT statement that received data from a UNION, the order of the columns was different from the original table. If you change the order in #table3 to a, b, c, you will fix the error.
select a, b, c into #table1
from #table0
insert into #table1
select a, b, c from #table2
union
select a, c, b from #table3
On SQL Server you can use SET ANSI_WARNINGS OFF like this (note that with the warnings off, the offending value is silently truncated instead of raising an error):
using (SqlConnection conn = new SqlConnection("Data Source=XRAYGOAT\\SQLEXPRESS;Initial Catalog='Healthy Care';Integrated Security=True"))
{
conn.Open();
using (var trans = conn.BeginTransaction())
{
try
{
using (var cmd = new SqlCommand("", conn, trans))
{
cmd.CommandText = "SET ANSI_WARNINGS OFF";
cmd.ExecuteNonQuery();
cmd.CommandText = "YOUR INSERT HERE";
cmd.ExecuteNonQuery();
cmd.Parameters.Clear();
cmd.CommandText = "SET ANSI_WARNINGS ON";
cmd.ExecuteNonQuery();
trans.Commit();
}
}
catch (Exception)
{
trans.Rollback();
}
}
conn.Close();
}
I had the same issue. The length of my column was too short.
What you can do is either increase the length or shorten the text you want to put in the database.
Also had this problem occur on the web application surface.
Eventually found out that the same error message was coming from the SQL UPDATE statement on a specific table.
Finally figured out that the column definitions in the related history table(s) did not match the original table's column lengths for the nvarchar types in some specific cases.
I had the same problem, even after increasing the size of the problematic columns in the table.
tl;dr: The length of the matching columns in corresponding Table Types may also need to be increased.
In my case, the error was coming from the Data Export service in Microsoft Dynamics CRM, which allows CRM data to be synced to an SQL Server DB or Azure SQL DB.
After a lengthy investigation, I concluded that the Data Export service must be using Table-Valued Parameters:
You can use table-valued parameters to send multiple rows of data to a Transact-SQL statement or a routine, such as a stored procedure or function, without creating a temporary table or many parameters.
As you can see in the documentation above, Table Types are used to create the data ingestion procedure:
CREATE TYPE LocationTableType AS TABLE (...);
CREATE PROCEDURE dbo.usp_InsertProductionLocation
@TVP LocationTableType READONLY
Unfortunately, there is no way to alter a Table Type, so it has to be dropped and recreated entirely. Since my table has over 300 fields (😱), I created a query to facilitate the creation of the corresponding Table Type based on the table's column definitions (just replace [table_name] with your table's name):
SELECT 'CREATE TYPE [table_name]Type AS TABLE (' + STRING_AGG(CAST(field AS VARCHAR(max)), ',' + CHAR(10)) + ');' AS create_type
FROM (
SELECT TOP 5000 COLUMN_NAME + ' ' + DATA_TYPE
+ IIF(CHARACTER_MAXIMUM_LENGTH IS NULL, '', CONCAT('(', IIF(CHARACTER_MAXIMUM_LENGTH = -1, 'max', CONCAT(CHARACTER_MAXIMUM_LENGTH,'')), ')'))
+ IIF(DATA_TYPE = 'decimal', CONCAT('(', NUMERIC_PRECISION, ',', NUMERIC_SCALE, ')'), '')
AS field
FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_NAME = '[table_name]'
ORDER BY ORDINAL_POSITION) AS T;
After updating the Table Type, the Data Export service started functioning properly once again! :)
When I tried to execute my stored procedure, I had the same problem, because the size of the column that I needed to add some data to was shorter than the data I wanted to add.
You can increase the size of the column's data type or reduce the length of your data.
A 2016/2017 update will show you the bad value and column.
A new trace flag will swap the old error for a new 2628 error and will print out the column and offending value. Traceflag 460 is available in the latest cumulative update for 2016 and 2017:
https://support.microsoft.com/en-sg/help/4468101/optional-replacement-for-string-or-binary-data-would-be-truncated
Just make sure that after you've installed the CU you enable the trace flag, either globally/permanently on the server (e.g. with the -T460 startup parameter)...
...or for all sessions with DBCC TRACEON (460, -1):
https://learn.microsoft.com/en-us/sql/t-sql/database-console-commands/dbcc-traceon-trace-flags-transact-sql?view=sql-server-ver15
Another situation in which this error may occur is in SQL Server Management Studio. If you have "text" or "ntext" fields in your table, then no matter what kind of field you are updating (for example a bit or an integer), it seems that the Studio does not load entire "ntext" fields and also updates ALL fields instead of just the modified one.
To solve the problem, exclude "text" or "ntext" fields from the query in Management Studio.
This error comes only when one of your field values is longer than the field length specified in the SQL Server database table structure.
To overcome this issue, either reduce the length of the field value or increase the length of the database table field.
If someone is encountering this error in a C# application: see the same approach above (under "C# Datatable String or binary data truncated"), which finds the offending fields by reading the column widths from the database schema and comparing them with the widths of the values being inserted or updated.
Kevin Pope's comment under the accepted answer was what I needed.
The problem, in my case, was that I had triggers defined on my table that would insert update/insert transactions into an audit table, but the audit table had a data type mismatch where a column with VARCHAR(MAX) in the original table was stored as VARCHAR(1) in the audit table, so my triggers failed whenever I inserted anything longer than VARCHAR(1) into the original table column, and I would get this error message.
I used a different tactic: the source fields are allocated 8K in some places, but only about 50-100 characters of that are actually used, so I truncate the values with left() to match the declared sizes.
declare @NVPN_list as table (
nvpn varchar(50)
,nvpn_revision varchar(5)
,nvpn_iteration INT
,mpn_lifecycle varchar(30)
,mfr varchar(100)
,mpn varchar(50)
,mpn_revision varchar(5)
,mpn_iteration INT
-- ...
)
INSERT INTO @NVPN_list
SELECT left(nvpn, 50) as nvpn
,left(nvpn_revision, 5) as nvpn_revision
,nvpn_iteration
,left(mpn_lifecycle, 30)
,left(mfr, 100)
,left(mpn, 50)
,left(mpn_revision, 5)
,mpn_iteration
,left(mfr_order_num, 50)
FROM [DASHBOARD].[dbo].[mpnAttributes] (NOLOCK) mpna
I wanted speed, since I have 1M total records, and load 28K of them.
This error may be due to the field size being smaller than the data you entered.
For e.g., if you have the data type nvarchar(7) and your value is 'aaaaddddf', then the error shown is:
string or binary data would be truncated
You simply can't beat SQL Server on this.
You can insert into a new table like this:
select foo, bar
into tmp_new_table_to_dispose_later
from my_table
and compare the table definition with the real table you want to insert the data into.
Sometimes it's helpful, sometimes it's not.
If you try inserting into the final/real table from that temporary table, it may just work (due to data conversion working differently than in SSMS, for example).
Another alternative is to insert the data in chunks: instead of inserting everything at once, insert the top 1000, then repeat the process until you find the chunk with an error. At least you have better visibility on what's not fitting into the table.

Efficient ways to save C# nested objects to SQL server

I have a table structure which is nested to 5 levels, with a one-to-many relationship going downwards.
I want to know an efficient way to save this kind of data to SQL Server. I currently loop over each child object (in C#) and run an insert, which becomes slow if the data is large.
Is there a way to pass the C# objects directly to SQL in traditional ADO.NET? I have a custom framework which fires a SQL script for each insert, picking up values from the object properties. I can't move to EF or NHibernate as it's an existing project.
I have seen ways where C# objects can be inserted into DataTables and then passed to SQL; is that an efficient way?
Please advise.
I'm assuming that you have something like this from a database perspective
CREATE TABLE Items (ID INT -- primary key,
Name VARCHAR(MAX),
ParentID INT) -- foreign key that loops on the same table
and an object like this in C#
public class Item
{
public int ID {get; set;}
public string Name {get; set;}
public int ParentID {get; set;}
public Item Parent {get; set;}
public List<Item> Children {get; set;}
}
and you have some code that looks like:
var root = MakeMeATree();
databaseSaver.SaveToDatabase(root);
that generates an insert-per-item for every child. If you have lots of children, this can really slow down the application.
What I would use (and have used) in this case is a custom sql server type and a stored procedure to save the whole thing in a single call.
You will need to create a type that matches the table:
CREATE TYPE dbo.ItemType AS TABLE
(
ID INT,
Name VARCHAR(MAX),
ParentID INT
);
and a simple procedure that uses the type:
CREATE PROCEDURE dbo.InsertItems
(
@Items AS dbo.ItemType READONLY
)
AS
BEGIN
INSERT INTO SampleTable(ID, Name, ParentID)
SELECT ID, Name, ParentID FROM @Items
END
Now, that does it from the SQL Server side. Now on to the C# side. You need to do two things:
Flatten the hierarchy into a list
Sent that list as a datatable to the database
The first can be done using a small Flatten extension method (several versions of this float around; they are all basically the same thing), with a simple
var items = root.Flatten(i => i.Children);
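Since the original link doesn't survive here, a minimal sketch of such an extension method (iterative, so deep trees don't overflow the call stack):

public static class TreeExtensions
{
    // Walks the tree and yields every node, root included.
    public static IEnumerable<T> Flatten<T>(this T root, Func<T, IEnumerable<T>> childSelector)
    {
        var stack = new Stack<T>();
        stack.Push(root);
        while (stack.Count > 0)
        {
            var current = stack.Pop();
            yield return current;
            foreach (var child in childSelector(current) ?? Enumerable.Empty<T>())
                stack.Push(child);
        }
    }
}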
To do the second thing, first you need to declare the SQL Server type as a datatable:
DataTable dt = new DataTable("Items");
dt.Columns.Add("ID", typeof(int));
dt.Columns.Add("Name", typeof(string));
dt.Columns.Add("ParentID", typeof(int));
next, just fill the values:
foreach(var item in items)
{
dt.Rows.Add(item.ID, item.Name, item.ParentID);
}
and attach them to a SqlParameter, that should be of the SqlDbType.Structured type, like this:
using (var cmd = new SqlCommand("InsertItems", connection))
{
cmd.CommandType = CommandType.StoredProcedure;
var itemsParam = new SqlParameter("@Items", SqlDbType.Structured);
itemsParam.Value = dt;
cmd.Parameters.Add(itemsParam);
cmd.ExecuteNonQuery();
}
And, that should be it.
Yes. If you want your dataset objects to be stored in the database, you can go with options like:
Create a SQL user-defined table type.
Fill the object with the required values, read the value in the stored procedure, fill a temp table from it, and then, using your business logic, store it in the DB.
