I have a confusing issue. I have a WPF application that uses a local SQL Server database with Entity Framework. I'm trying to store a byte array of length 120 in a BINARY(120) column.
The data should fit, but for some reason I keep getting the error, 'String or binary data would be truncated'.
The table structure looks like this:
Column_name  Type     Computed  Length
Id           int      no        4
Foo          varchar  no        500
Bar          binary   no        120
As an experiment, I tried altering the column to VARBINARY(MAX), but even then I still get the same error. That would usually suggest another column is causing the problem; however, I know the problem is not the Id column, and the string I'm storing in Foo is "test string", which is obviously small enough.
The method that is inserting into the database isn't doing anything special:
public void Create(FooEntry entry)
{
    _context.FooEntries.Add(entry);
    _context.SaveChanges();
}
Maybe I'm missing something really obvious, but how should I go about fixing this issue?
Update:
Yes, I am quite sure the EDMX and the database table are in sync (because when I update the database table, I always refresh the EDMX).
I attempted to manually insert a byte array of length 120 into the table, and did not get the error. When I manually inserted a byte array of length 121, the 'String or binary data would be truncated' error occurred, which is the correct behavior.
The byte array that I was attempting to insert while debugging the program was this value:
'0x2400320061002400310030002400510078004400310072003400390079004C006D0041004A00760049004E005700730069007900490058004F007400740052007600500031006900700078004C0044007000610032004900350038007700310059002E0071003300680033004B0064002F00430074005700'
which is a byte array of length 120.
Update 2:
I ran SQL Server Profiler and it gave me the following SQL query:
exec sp_executesql N'INSERT [dbo].[FooEntry]([Foo], [Bar])
VALUES (@0, @1)
SELECT [Id]
FROM [dbo].[FooEntry]
WHERE @@ROWCOUNT > 0 AND [Id] = scope_identity()',N'@0 varchar(500),@1 binary(120)',@0='test string',@1=0x24003200610024003100300024004100420047007400560056006A0076004D006F006500770036005000300054004D002F006B007800740075007A00490036004F004400660047004A0062004E0069004E00790041007A0051006A0058004300660072007400360051007A0043004B004400580075006500
Very weird. This query works when I execute it manually, but EF throws an error when this query is executed while the program is running.
I have finally figured out the problem, and the answer is ridiculous.
There were two databases in my project directory, DatabaseA and DatabaseB that had the same schema. Entity Framework was running queries against DatabaseA, however my EDMX was pointing to DatabaseB, and the database that was showing in my Server Explorer was DatabaseB.
This mismatch probably crept in while I was fixing an earlier problem that caused my local SQL Server database to be overwritten on each build.
My advice for anybody who runs into this issue is to make sure the app.config files in your solution all have connection strings pointing to the same database. The way I fixed this problem was to update the connection string in my data layer:
1) I removed the connection string from my app.config file
2) In my EDMX designer, I deleted my FooEntry table and then right-clicked and selected 'Update Model from Database'. Then a dialog popped up that allowed me to create a new connection string. Through this dialog, you can browse for the correct database and Visual Studio will make the correct connection string for you.
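As a sanity check while debugging this kind of mismatch, it can also help to print the connection string Entity Framework is actually using at runtime and compare it with what Server Explorer shows. A minimal sketch, assuming your generated context derives from DbContext (the method name is made up; adjust if your EDMX generates an ObjectContext):

using System.Diagnostics;

public void DumpConnectionInfo()
{
    // Shows which database EF will really talk to,
    // regardless of what Server Explorer is displaying.
    Debug.WriteLine(_context.Database.Connection.ConnectionString);
    Debug.WriteLine(_context.Database.Connection.Database);
}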
Related
I'm having a very strange issue. I have a Windows service that is failing because of a SqlException: "String or binary data would be truncated." on an INSERT statement.
Now, this is normally a fairly basic error to solve, but it is not the true error here. If I do a trace and run the query straight on the database, there is NO error. All of the data is WAY shorter than the restrictions on the database.
I eventually took some of the required columns out of the query, hoping to generate a different error: "Cannot insert the value NULL into column 'Type'"
However, I don't get this error! I am still getting "String or binary data would be truncated."
I DO get the NULL error if I run the query from the trace straight on the DB.
Does anyone have any ideas on what could be happening here? Thanks!
Edit to add
Here's the query that is supposed to give me the 'Cannot insert the value NULL' error. The other query is the same, but with more parameters:
declare @p4 int
set @p4=60029550
exec sp_executesql N'EXECUTE ssl_GetTTDVersionCallSeq 1, @TTDVersionIDVal OUTPUT
INSERT INTO TTDVersion(Task,ID) VALUES (@P0,@TTDVersionIDVal)',N'@P0 int,@TTDVersionIDVal int output',@P0=200003762,@TTDVersionIDVal=@p4 output
select @p4
Found the cause of this error. It was not in the query posted above at all. There was a trigger on the table that set the LastUpdatedBy column.
Most users have a 4-character user name, but the account the service ran as didn't. The column limit was 4 characters.
Avoid this issue:
Triggers can be problematic. They aren't immediately visible - sp_help 'table' doesn't even return them. If a trigger errors, you can't even see its query in a trace.
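A quick way to check whether a table has triggers at all is to query the catalog views. A sketch (using the table name from this question; substitute your own):

SELECT t.name AS trigger_name,
       OBJECT_NAME(t.parent_id) AS table_name,
       t.is_disabled
FROM sys.triggers t
WHERE t.parent_id = OBJECT_ID('dbo.TTDVersion');

sp_helptrigger 'dbo.TTDVersion' returns similar information.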
What I should have tried earlier:
Running the query as the user. (I wanted to but it's an admin user and someone else was using it at the time.)
Checking every column's definition against what was actually in it and what was in the query, and then checking how that data gets there. In this case, a default constraint probably would have caused the same issue.
So I have a table with a column of type VARCHAR(100), and I'm wondering if there's a way to configure SQL Server 2012 (T-SQL) so that if a transaction tries to submit a string of 101+ characters, it takes the first 100.
Is this possible, or should I be doing the truncation on the C# side of things?
Normally, SQL Server will raise an error on any attempt to insert more data into a field than it can hold:
String or binary data would be truncated. The statement has been terminated.
SQL Server will not permit a silent truncation of data just because the column is too small to accept the data. But there are other ways that SQL Server can truncate data that is about to be inserted into a table that will not generate any form of error or warning.
By default, ANSI_WARNINGS are turned on, and certain activities such as creating indexes on computed columns or indexed views require that they be turned on. But if they are turned off, SQL Server will truncate the data as needed to make it fit into the column. The ANSI_WARNINGS setting for a session can be controlled by
SET ANSI_WARNINGS { ON|OFF }
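As a quick illustration of the effect (a throwaway sketch using a temp table), with the setting off the oversized insert silently truncates instead of failing:

SET ANSI_WARNINGS OFF;
CREATE TABLE #Demo (Val varchar(5));
INSERT INTO #Demo (Val) VALUES ('This is a long string');  -- succeeds, no error
SELECT Val FROM #Demo;  -- returns 'This '
DROP TABLE #Demo;
SET ANSI_WARNINGS ON;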
Unlike with an insert into a table, SQL Server will quietly cut off data that is being assigned to a variable, regardless of the status of ANSI_WARNINGS. For instance:
declare @smallString varchar(5)
declare @testint int
set @smallString = 'This is a long string'
set @testint = 123.456
print @smallString
print @testint
The result is:
This
123
This can occasionally show itself in subtle ways, since passing a value into a stored procedure or function assigns it to the parameter variable, which will quietly do a conversion. One method that can help guard against this situation is to give any parameter that will be directly inserted into a table a larger datatype than the target column, so that SQL Server will raise the error, or to check the length of the parameter and handle it with custom code when it is too long.
For instance, if a stored procedure will use a parameter to insert data into a table with a varchar(10) column, make the parameter varchar(15). Then, if the data passed in is too long for the column, the insert will roll back and raise a truncation error instead of silently truncating and inserting. Of course, that runs the risk of misleading anyone who looks at the stored procedure's header information without understanding what was done.
Source: Silent Truncation of SQL Server Data Inserts
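To make the parameter-sizing advice concrete, here is a minimal sketch (the table and procedure names are invented, not from the source):

CREATE TABLE dbo.Widget (Name varchar(10));
GO
CREATE PROCEDURE dbo.InsertWidget
    @Name varchar(15)  -- deliberately wider than the target column
AS
BEGIN
    IF LEN(@Name) > 10
    BEGIN
        -- Custom handling instead of a silent truncation.
        RAISERROR('Name is longer than 10 characters.', 16, 1);
        RETURN;
    END
    INSERT INTO dbo.Widget (Name) VALUES (@Name);
END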
Do this at the code level. Before inserting, check the field's length and Substring it:
string a = "string with more than 100 symbols";
if(a.Length > 100)
a = a.Substring(0, 100);
After that, you add a as a SQL parameter to the INSERT query.
The other way is to do it in the query, but again, I don't advise you to do that:
INSERT INTO Table1 (YourColumn) VALUES (LEFT(RTRIM(@stringMoreThan100symbols), 100))
RTRIM trims the trailing spaces off the string, and LEFT then cuts it to 100 characters.
My suggestion would be to make the application side responsible for validating the input before calling any DB operation.
SQL Server silently truncates any varchar you pass as a stored procedure parameter to the declared length of that parameter. So you could consider using stored procedures for your requirement; the truncation will then happen automatically.
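A quick sketch of that behavior (the procedure name is invented): the value is cut to the parameter's declared length before the procedure body ever sees it:

CREATE PROCEDURE dbo.EchoShort
    @s varchar(10)
AS
    SELECT @s;
GO
EXEC dbo.EchoShort @s = 'This string is longer than ten';
-- returns 'This strin', with no error or warning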
If you have entity classes (not necessarily from EF), you can use the StringLength attribute (with your field's length) to do this.
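A minimal sketch (the class and property names are invented), using the attribute from System.ComponentModel.DataAnnotations; note that with EF this fails validation before the INSERT ever reaches SQL Server, rather than truncating:

using System.ComponentModel.DataAnnotations;

public class Customer
{
    public int Id { get; set; }

    // Anything longer than 100 characters fails validation
    // instead of triggering the SQL truncation error.
    [StringLength(100)]
    public string Name { get; set; }
}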
I am developing a project using VS 2008 and SQL Server 2005.
I have used a varchar(150) field for saving a normal string, which has to hold one of the following values:
softcopy
hardcopy
both (softcopy & hardcopy)
It works when saving softcopy or hardcopy, but for both (softcopy & hardcopy) it throws the following error:
String or binary data would be truncated. The statement has been terminated.
When I restart the application after this error has occurred, everything works perfectly.
I tried using nvarchar(max) for the same field, but the error was the same.
Please give me a suggestion for avoiding this error.
The error says you are updating the column with more data than it can accommodate. Check for blank spaces in the column value.
This happens if you are trying to insert too much data into a field that has a limited size, in your case 150.
The error "String or binary data would be truncated. The statement has been terminated." comes when the size is exceeded. Try updating your size from varchar(150) to varchar(500).
Or check the data length in the query or stored procedure where you assign the value to the field saved in the DB (relevant since you changed it to nvarchar(max) and it's still not working).
This exception occurs when your data type or size does not match the database field's attributes.
It may be because your column length is less than the data you are inserting.
So if you increase your column length to one that can hold the data you input, the issue will be resolved, as shown below.
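For example, a sketch of widening the column (the table and column names are placeholders; restate NULL/NOT NULL explicitly, since ALTER COLUMN otherwise falls back to the database default):

ALTER TABLE dbo.MyTable
ALTER COLUMN MyColumn varchar(500) NOT NULL;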
If you use the nvarchar (Unicode character data) data type, then you need to insert the data with an N prefix, like this:
SET @var = N'Hello World'
Given:
A very large XML file that is loaded into a table using the nvarchar(max) data type. This doubles the size of the data (probably due to SQL Server encoding it as Unicode). Later on, we read the file from the table, parse it, and do a bulk insert into other tables in the database.
Problem:
On the development server, this works fine and there are no issues. However, upon attempting the bulk insert on a production server, I receive the following error:
Exception: System.InvalidOperationException: The given value of type String from the data source cannot be converted to type nvarchar of the specified target column. ---> System.InvalidOperationException: String or binary data would be truncated.
A couple of peculiar things I have noticed:
When FTPing an ANSI version of the XML file (to be read later by the web app), it gains a few bytes and then DOUBLES in size when inserted into our table. When FTPing a Unicode version, the bytes remain the same, but it also DOUBLES and then fails miserably
b e c a u s e t h e d a t a s t a r t s t o l o o k l i k e t h i s.
We ruled out bad data by stripping down the XML to one record under the root. Development handled it, production did not.
Something MUST be different between the configuration of our development and production servers, but we can't figure it out. The collation is the same, by the way.
Any help would be greatly appreciated!
EDIT: An Update: We tried reading the file into the XmlDocument object directly from the server and bypassing the process of storing it to the db. No change in behavior.
Second Update: We ruled out the FTP process (maybe?) by copying the file over and then BACK (the file size shrinks by a few bytes, but we get those bytes back upon copying it back over).
The "truncated" warning suggests to me that in production the column is not, in fact, max - but rather something like nvarchar(4000) (the old maximum before you had to go to ntext).
Verify that the column is in fact max.
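One way to verify it (a sketch; substitute your table name) is to check max_length in sys.columns, where -1 means the column really is declared as (max):

SELECT c.name, t.name AS type_name, c.max_length
FROM sys.columns c
JOIN sys.types t ON c.user_type_id = t.user_type_id
WHERE c.object_id = OBJECT_ID('dbo.MyXmlTable');
-- max_length = -1   => nvarchar(max) / varbinary(max)
-- max_length = 8000 => nvarchar(4000), since nvarchar stores 2 bytes per character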
As a side note, if you are only storing the data, varbinary(max) would be preferred - it will avoid the doubling, etc. And if you are inspecting the data, xml might be preferred.
Since this was a new instance of the application, dropping the two tables and re-adding them fixed the problem (this was done using SQL Compare).
This was how I solved the problem but I believe Marc Gravell is on to something.
The collation of the column is what matters. The collation of the table, database, and even the collation setting of the SQL Server itself simply define what default collation will be used the next time a new column is created.
As you can imagine, it's not uncommon to end up with individual columns set to the wrong collation value.
Pinal Dave has several useful scripts on his blog, including this one which allows you to see the current collation settings of columns:
/* Find Collation of SQL Server Database */
SELECT DATABASEPROPERTYEX('AdventureWorks', 'Collation')
GO
/* Find Collation of SQL Server Database Table Column */
USE AdventureWorks
GO
SELECT name, collation_name
FROM sys.columns
WHERE OBJECT_ID IN (SELECT OBJECT_ID
FROM sys.objects
WHERE type = 'U'
AND name = 'Address')
AND name = 'City'
There is also a very comprehensive follow-up post with an entire set of scripts (written by Brian Cidern) that allow you to identify and resolve collation conflicts.
I have a simple program that reads an Excel file (using interop) and fills an MSSQL database file with some data extracted from it. That works fine so far.
I have a Shops table with the following fields:
ID: int, auto-generated, auto-sync: on insert
Name: string
Settlement: string
County: string
Address: string
I read the Excel file, then create a new Shops object, set the Name, Settlement, County, and Address properties, and call Shops.InsertOnSubmit() with the new Shops object.
After this I have to reset the database (at least, the table), for which the easiest way I found was to call the DeleteDatabase() method and then call CreateDatabase() again.
The problem is that after the first reset, when I try to fill the table again, I get the exception: The database generated a key that is already in use.
Additionally, from that point on, I'm unable to use that database file, because DatabaseExists() returns FALSE, but when I call the CreateDatabase() method, it throws an exception saying that the database already exists (although the data files don't exist).
What am I doing wrong?
Thank you very much in advance!
It sounds like you are re-using the data context beyond what is wise. Try disposing of and re-creating the data context after deleting the database.
I suspect the problem is that the identity manager is still tracking objects (destroying and recreating the database is such an edge case that I think we can forgive it for not resetting itself here). A sketch of the dispose-and-recreate pattern is below.
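A minimal sketch of that pattern, assuming a LINQ to SQL context class named ShopsDataContext (an invented name):

// Reset the database with one short-lived context...
using (var context = new ShopsDataContext(connectionString))
{
    if (context.DatabaseExists())
        context.DeleteDatabase();
    context.CreateDatabase();
}

// ...then re-fill with a brand-new context, so no stale
// identity tracking survives from before the reset.
using (var context = new ShopsDataContext(connectionString))
{
    context.Shops.InsertOnSubmit(new Shops
    {
        Name = "Example",
        Settlement = "Example",
        County = "Example",
        Address = "Example"
    });
    context.SubmitChanges();
}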
I encountered this error too. I had a log table with an identity column, and I was truncating the log while my application was running. What happened was that the DB restarted the identity column when I truncated, but the data context I was using to log still had tracked objects with the same keys.
I encountered this error because I was using a custom stored procedure for the insert on a table that had an identity column, but I had forgotten to SET @Id = SCOPE_IDENTITY() at the end of my sproc.
I wasn't actually using the resulting identity value, so the problem only showed up when I inserted two or more rows. Tricky bug. A sketch of the missing piece is below.
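For reference, a sketch of what the end of such a procedure should look like (the table, column, and parameter names are invented):

CREATE PROCEDURE dbo.InsertItem
    @Name varchar(100),
    @Id int OUTPUT
AS
BEGIN
    INSERT INTO dbo.Items (Name) VALUES (@Name);
    -- Without this line the caller never learns the generated key,
    -- and the ORM can end up tracking a stale or duplicate key.
    SET @Id = SCOPE_IDENTITY();
END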