We have a textarea in which users can write anything; it is part of a dynamic view that is loaded later. When the user is ready to save, the textarea's content is sent to be saved in the database.
The problem is that sometimes it works and sometimes it doesn't. When it doesn't work, it is because we receive:
Incorrect syntax near '?'.
We have replaced all single quotes with two single quotes, and we have made sure that it reaches the database correctly.
ALTER PROCEDURE dbo.SaveHtml @htmlValue NVARCHAR(MAX)
AS
DECLARE @SQL_QUERY NVARCHAR(MAX);
SET @SQL_QUERY = N'UPDATE TEMP_TABLE SET BLOCK_1 = N{VALUE} WHERE ID = 12';
SET @SQL_QUERY = REPLACE(@SQL_QUERY,'{VALUE}', ISNULL(@htmlValue,NULL));
EXECUTE (@SQL_QUERY);
Basically that is what it does; we build dynamic SQL because there are parameters which are dynamic, but I have omitted that part for this example.
It works, but sometimes when the user copies and pastes data from Word, the string arrives with characters like иĆÊô∀⠢Ĉ!ࠀސȁސ﷿ ױ﹗£ਮշωԊӍԊӍ̳¡﹩fȒ@† ™ IJ囌垐輈č靀ミ, and later we received the indicated error.
Sometimes it even arrives with characters like those (not necessarily exactly the same ones or in the same order) and is saved successfully.
For the example mentioned, if I delete the first ސ character, the string is saved successfully.
'<p class="MsoNormal" style="margin-top:0cm;margin-right:0cm;margin-bottom:0cm;
margin-left:36.2pt;text-align:justify;text-indent:-18.0pt;line-height:normal;иĆÊô∀⠢Ĉ!ࠀސȁސ﷿ ױ﹗£ਮշωԊӍԊӍ̳¡﹩fȒ@† ™ IJ囌垐輈č靀ミ...'
When I debug and print the dynamic SQL, it looks like this:
UPDATE TEMP_TABLE SET BLOCK_1 = N'<div class="note-editable" contenteditable="true" role="textbox" aria-multiline="true" ...' WHERE ...
These are the strings that we save.
This is currently in production because it works most of the time; the times that it does not work are exceptional. But when Chinese, Arabic and other special characters arrive, it becomes a lottery whether our code works.
How could I solve this problem?
You are not quoting your string, nor treating it as an NVARCHAR. Instead of this:
SET @SQL_QUERY = REPLACE(@SQL_QUERY,'{VALUE}', ISNULL(@htmlValue,NULL));
You need this for syntactically correct SQL:
SET @SQL_QUERY = REPLACE(@SQL_QUERY,'{VALUE}', ISNULL('N''' + @htmlValue + '''','NULL'));
However, the correct and safe way to do this is using sp_executesql with a proper parameter, as follows:
SET @SQL_QUERY = N'UPDATE TEMP_TABLE SET BLOCK_1 = @VALUE WHERE ID = 12';
EXEC sp_executesql @SQL_QUERY, N'@VALUE NVARCHAR(MAX)', @htmlValue;
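Putting it together, a minimal sketch of the whole procedure rewritten this way (keeping the table, column and parameter names from the question, including the hard-coded WHERE ID = 12) could look like this:

ALTER PROCEDURE dbo.SaveHtml @htmlValue NVARCHAR(MAX)
AS
BEGIN
    DECLARE @SQL_QUERY NVARCHAR(MAX);

    -- The parameter marker stays inside the string: no quoting, no doubling of quotes,
    -- and a NULL value is passed through as a real NULL.
    SET @SQL_QUERY = N'UPDATE TEMP_TABLE SET BLOCK_1 = @VALUE WHERE ID = 12';

    EXEC sp_executesql @SQL_QUERY, N'@VALUE NVARCHAR(MAX)', @VALUE = @htmlValue;
END;

Because the value now travels as a true NVARCHAR parameter, characters pasted from Word (including surrogate pairs) can no longer break the SQL syntax.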
Note: the way to debug dynamic SQL is by using the PRINT statement, which, if you run it on your original SQL, gives:
UPDATE TEMP_TABLE SET
BLOCK_1 = иĆÊô∀⠢Ĉ!ࠀސȁސ﷿ ױ﹗£ਮշωԊӍԊӍ̳¡﹩fȒ@† ™ IJ囌垐輈č靀ミ
WHERE ID = 12
Which is clearly wrong, you need:
UPDATE TEMP_TABLE SET
BLOCK_1 = N'иĆÊô∀⠢Ĉ!ࠀސȁސ﷿ ױ﹗£ਮշωԊӍԊӍ̳¡﹩fȒ@† ™ IJ囌垐輈č靀ミ'
WHERE ID = 12
Apologies if this has been asked before. I have searched for an hour and not found the exact problem I am having.
I am using SMO to run some queries against SQL Server, because I have read this can handle GO statements, as opposed to System.Data.SqlClient.
I am running this query:
SET QUOTED_IDENTIFIER ON
GO
SET ANSI_NULLS ON
GO
UPDATE Program
SET ENABLED = '1'
WHERE Program_ID = '64' AND Program_Name = 'DoesSomething'
I am capturing the "rows affected" by:
int numberOfRows = db.ConnectionContext.ExecuteNonQuery(s.Query);
The problem I am having is that this returns a value of -1 every time. I am checking the database behind the scenes and it DOES update the value every time, and when I manually run the query in SSMS, I receive the (1 row affected) confirmation.
After reading some other posts I have come to think the problem may be in the first four lines of this query. I removed the GO statements, and the query returned with a value of 1 instead of -1.
Most queries/scripts my group writes have GO, SET ANSI_NULLS ON, and SET QUOTED_IDENTIFIER ON pretty much as standard, so it's everywhere (another problem for another day; I'm aware it is excessive/irrelevant in this case). Is this affecting my row count somehow? If so, can you provide some direction for a workaround, or perhaps another route to get this result?
And yes... I am aware ExecuteNonQuery should return the number of rows affected. I just need to know why it is not returning what I think it should (in this case 1).
GO is the batch separator used by SQLCMD, OSQL and other tools. Since SMO's ServerConnection.ExecuteNonQuery happens to recognize the SQLCMD commands, it handles the batch separator for you and executes every batch you issue in your text. The 'return' value of a series of batches is not a well-defined concept, since each batch may have its own return (not to mention that every batch may contain multiple statements). So I would take that 'return' value with a grain of salt. If you want to confirm the number of rows updated, use an OUTPUT clause in the UPDATE and read back a result set, or assign @@ROWCOUNT to an output variable.
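As a rough sketch of those two alternatives, using the table and columns from the question, the batch itself can return the count as a result set instead of relying on ExecuteNonQuery's return value:

-- Option 1: OUTPUT clause - the UPDATE itself returns one row per updated row.
UPDATE Program
SET ENABLED = '1'
OUTPUT inserted.Program_ID
WHERE Program_ID = '64' AND Program_Name = 'DoesSomething'

-- Option 2: capture @@ROWCOUNT right after the UPDATE and select it back.
DECLARE @rows INT
UPDATE Program
SET ENABLED = '1'
WHERE Program_ID = '64' AND Program_Name = 'DoesSomething'
SET @rows = @@ROWCOUNT
SELECT @rows AS RowsAffected

From SMO you would then read the result set (for example via ServerConnection.ExecuteWithResults) rather than the ExecuteNonQuery return value.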
So I have a table with a column of type VARCHAR (100) and I'm wondering if there's a way to configure SQL Server 2012 (T-SQL) so that if a transaction tries to submit a string of 101+ characters then it takes the first 100.
Is this possible, or should I be doing the truncation on the C# side of things?
Normally, SQL Server will present an error on any attempt to insert more data into a field than it can hold:
String or binary data would be truncated. The statement has been terminated.
SQL Server will not permit a silent truncation of data just because the column is too small to accept the data. But there are other ways that SQL Server can truncate data that is about to be inserted into a table that will not generate any form of error or warning.
By default, ANSI_WARNINGS are turned on, and certain activities such as creating indexes on computed columns or indexed views require that they be turned on. But if they are turned off, SQL Server will truncate the data as needed to make it fit into the column. The ANSI_WARNINGS setting for a session can be controlled by
SET ANSI_WARNINGS { ON|OFF }
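For example, a minimal sketch (using a throwaway table, nothing from the question) of what that setting changes on an INSERT:

CREATE TABLE dbo.TruncTest (Val VARCHAR(5));

SET ANSI_WARNINGS ON;
INSERT INTO dbo.TruncTest (Val) VALUES ('This is a long string');  -- fails with the truncation error

SET ANSI_WARNINGS OFF;
INSERT INTO dbo.TruncTest (Val) VALUES ('This is a long string');  -- succeeds, silently storing 'This '

SELECT Val FROM dbo.TruncTest;  -- returns only 'This '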
Unlike with an insert into a table, SQL Server will quietly cut off data that is being assigned to a variable, regardless of the status of ANSI_WARNINGS. For instance:
declare @smallString varchar(5)
declare @testint int
set @smallString = 'This is a long string'
set @testint = 123.456
print @smallString
print @testint
The result is:
This
123
This can occasionally show itself in subtle ways since passing a value into a stored procedure or function assigns it to the parameter variables and will quietly do a conversion. One method that can help guard against this situation is to give any parameter that will be directly inserted into a table a larger datatype than the target column so that SQL Server will raise the error, or perhaps to then check the length of the parameter and have custom code to handle it when it is too long.
For instance, if a stored procedure will use a parameter to insert data into a table with a column that is varchar(10), make the parameter varchar(15). Then if the data that is passed in is too long for the column, it will roll back and raise a truncation error instead of silently truncating and inserting. Of course, that runs the risk of being misleading to anyone who looks at the stored procedure's header information without understanding what was done.
Source: Silent Truncation of SQL Server Data Inserts
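As a minimal sketch of that oversized-parameter guard (not from the article above; the table and procedure names are hypothetical, using the varchar(10)/varchar(15) sizes mentioned in the previous paragraph):

CREATE TABLE dbo.Customer (Name VARCHAR(10) NOT NULL);
GO
CREATE PROCEDURE dbo.SaveCustomerName
    @Name VARCHAR(15)  -- deliberately wider than the target column, so nothing is lost on assignment
AS
BEGIN
    IF LEN(@Name) > 10
    BEGIN
        -- custom handling: raise a clear error instead of letting the data be truncated
        RAISERROR('Name is longer than the 10 characters the column allows.', 16, 1);
        RETURN;
    END;
    INSERT INTO dbo.Customer (Name) VALUES (@Name);
END;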
Do this at the code level. When you are inserting, check the field length and take a Substring of it:
string a = "string with more than 100 symbols";
if(a.Length > 100)
a = a.Substring(0, 100);
After that you add a as a SQL parameter to the INSERT query.
The other way is to do it in the query, but again I don't advise you to do that.
INSERT INTO Table1 (YourColumn) VALUES (LEFT(RTRIM(@stringMoreThan100symbols), 100))
RTRIM removes trailing spaces, and LEFT then keeps only the first 100 characters.
My suggestion would be to make the application side responsible for validating the input before calling any DB operation.
SQL Server silently truncates any varchar you pass as a stored procedure parameter to the declared length of that parameter. So you could consider using stored procedures for your requirement; the truncation then gets handled automatically.
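For instance, a small sketch (hypothetical object names, reusing the 100-character limit from the question) of how the parameter itself performs the truncation:

CREATE TABLE dbo.Table1 (YourColumn VARCHAR(100) NOT NULL);
GO
CREATE PROCEDURE dbo.InsertIntoTable1
    @YourColumn VARCHAR(100)  -- assigning a longer string to this parameter silently cuts it to 100 characters
AS
    INSERT INTO dbo.Table1 (YourColumn) VALUES (@YourColumn);
GO
DECLARE @tooLong VARCHAR(200) = REPLICATE('x', 150);
EXEC dbo.InsertIntoTable1 @YourColumn = @tooLong;  -- succeeds; only 100 'x' characters are stored

Keep in mind that this hides the data loss from the caller, which is exactly the behaviour the longer answer above warns about.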
If you have entity classes (not necessarily from EF), you can use the StringLength attribute (with your field's length) to do this.
Let's say I have a stored procedure SetCustomerName which has an input parameter Name, and I have a table customers with a column Name.
So inside my stored procedure I want to set customer's name. If I write
UPDATE customers SET Name = Name;
this is incorrect, and I see two other ways:
UPDATE customers SET Name = `Name`;
UPDATE customers SET customers.Name = Name;
The first one works, but I didn't find anything in the documentation saying that I can wrap parameters in backtick (`) characters. Or did I miss it in the documentation (a link is appreciated in this case)?
What other ways are there, and what is the standard way for such a case? Renaming the input parameter is not good for me (because I have automatic object-relational mapping, if you know what I mean).
UPDATE:
So, there is a link about backticks (http://dev.mysql.com/doc/refman/5.0/en/identifiers.html), but it doesn't explain deeply enough how to use them with parameters and column names.
And there is a very strange thing (at least for me): you can use backticks any of these ways:
UPDATE customers SET Name = `Name`;
-- or
UPDATE customers SET `Name` = Name;
-- or even
UPDATE customers SET `Name` = `Name`;
and they all work absolutely the same way.
Don't you think this is strange? Is this strange behavior explained somewhere?
The simplest way to distinguish between your parameter and your column (when both have the same name) is to prefix the column with the table name:
UPDATE customers SET customers.Name = Name;
You can even add the database prefix, like:
UPDATE yourdb.customers SET yourdb.customers.Name = Name;
By adding the database name you can perform actions on more than one database from a single stored procedure.
I think that your first example is actually backwards. If you're trying to set the "Name" column to the "Name" input parameter, I believe it should be:
UPDATE customers SET `Name` = Name;
And for the second example, you can set table aliases the same way that you do in all other statements:
UPDATE customers AS c SET c.Name = Name;
Not necessarily the correct answer, but a fair approach for better argument/parameter management, as well as readability and easier understanding, especially while working with the SQL:
DROP PROCEDURE IF EXISTS spTerminalDataDailyStatistics; DELIMITER $$
CREATE PROCEDURE spTerminalDataDailyStatistics(
IN TimeFrom DATETIME,
IN DayCount INT(10),
IN CustomerID BIGINT(20)
) COMMENT 'Daily Terminal data statistics in a date range' BEGIN
# Validate argument
SET @TimeFrom := IF(TimeFrom IS NULL, DATE_FORMAT(NOW(), '%Y-%m-01 00:00:00'), TimeFrom);
SET @DayCount := IF(DayCount IS NULL, 5, DayCount);
SET @CustomerID := CustomerID;
# Determine parameter
SET @TimeTo = DATE_ADD(DATE_ADD(@TimeFrom, INTERVAL @DayCount DAY), INTERVAL -1 SECOND);
# Do the job
SELECT DATE_FORMAT(TD.TerminalDataTime, '%Y-%m-%d') AS DataPeriod,
COUNT(0) AS DataCount,
MIN(TD.TerminalDataTime) AS Earliest,
MAX(TD.TerminalDataTime) AS Latest
FROM pnl_terminaldata AS TD
WHERE TD.TerminalDataTime BETWEEN @TimeFrom AND @TimeTo
AND (@CustomerID IS NULL OR TD.CustomerID = @CustomerID)
GROUP BY DataPeriod
ORDER BY DataPeriod ASC;
END $$
DELIMITER ;
CALL spTerminalDataDailyStatistics('2021-12-01', 2, 1801);
Using backticks in MySQL query syntax is documented here:
http://dev.mysql.com/doc/refman/5.0/en/identifiers.html
So yes, your first example (using backticks) is correct.
Here is the link you are asking for:
http://dev.mysql.com/doc/refman/5.0/en/identifiers.html
The backtick is called the "identifier quote character" in MySQL.
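To tie the answers together, here is a minimal sketch of the table-qualified form inside a complete procedure (the ID column and CustomerID parameter are hypothetical, added only to make the example runnable):

DELIMITER $$
CREATE PROCEDURE SetCustomerName(IN Name VARCHAR(100), IN CustomerID INT)
BEGIN
    # customers.Name can only mean the column; the bare Name is intended as the routine parameter.
    # Qualifying the column avoids relying on MySQL's implicit name-resolution rules.
    UPDATE customers
       SET customers.Name = Name
     WHERE customers.ID = CustomerID;
END $$
DELIMITER ;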
I have a table with a field defined as varchar(MAX). I'm confused about the limit of the field: in some places I read that varchar(MAX) has a size limit of 8K, and in other places it seems that the limit is 2GB.
I have a string that I want to save to the database; it's about 220K. I'm using linq-to-sql, and when the write query is submitted to the database, the row gets written without any exceptions generated. However, when I open the database table in SSMS, the cell that should contain the long string is empty. Why is that, and how do I take advantage of the 2GB limit that I read about?
This is the property in the linq-to-sql model:
All MAX datatypes--VARCHAR(MAX), NVARCHAR(MAX), and VARBINARY(MAX)--have a limit of 2 GB. There is nothing special you need to do. Without specifying MAX, the limit for VARCHAR and VARBINARY is 8000 and the limit for NVARCHAR is 4000 (due to NVARCHAR being double-byte). If you are not seeing any data come in at all, then something else is going on.
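If you want to convince yourself of the MAX limit independently of the application, a quick check in plain T-SQL (the CAST matters, because REPLICATE otherwise caps its result at 8,000 bytes):

DECLARE @big NVARCHAR(MAX) = REPLICATE(CAST(N'x' AS NVARCHAR(MAX)), 220000);
SELECT LEN(@big) AS CharacterCount, DATALENGTH(@big) AS ByteCount;  -- 220000 and 440000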
Are you sure that the column is even in the INSERT statement? If you submit test data of only 20 characters, does that get written? If you want to see what SQL is actually submitted by Linq, try running SQL Profiler and look at the SQL Statement: Statement Ended event, I believe.
Also, when you say that the "long string is empty", do you mean an actual empty string or do you mean NULL? If it is not NULL, you can also wrap the field in a LEN() function to see if there are blanks or returns at the beginning that push any non-whitespace characters out of view. Meaning, SELECT LEN(stringField), * FROM Table. Another thing to try is to use "Results to Text" instead of "Results to Grid" (this is a Query option).
EDIT:
Seeing that the field is marked as NOT NULL, are you sure that you are setting the ClientFileJS property of your object correctly? Is it possible that the empty string is due to that property being initialized as string ClientFileJS = ""; and is never updated?
I'm having trouble doing case insensitive string comparison using code first against an Oracle db. Code looks something like this;
String filter = "Ali";
var employee = dbContext.Employees.Where(x => x.Name.Contains(filter)).FirstOrDefault();
The code above behaves as case sensitive, so I converted both the Name and the filter to uppercase:
String filter = "Ali";
filter = filter.ToUpper();
var employee = dbContext.Employees.Where(x => x.Name.ToUpper().Contains(filter)).FirstOrDefault();
Everything seemed to work at first, but then I realized it's not working when the employee's name or the filter contains the character 'i'. The problem is how the letter i works in Turkish.
In most languages, 'i' stands for the lowercase and 'I' for the uppercase version of the same letter. In Turkish, however, the uppercase of 'i' is 'İ' and the lowercase of 'I' is 'ı'. This is a problem, since Oracle uppercases the letter 'i' in the db as 'I'.
We do not have access to change the db's character encoding settings, as the effects of such a change cannot easily be foreseen.
What I've come up with is this, and it is very ugly.
String filterInvariant = filter.ToUpper(CultureInfo.InvariantCulture);
String filterTurkish = filter.ToUpper(CultureInfo.CreateSpecificCulture("tr-TR"));
var employee = dbContext.Employees.Where(x => x.Name.ToUpper().Contains(filterInvariant) || x.Name.ToUpper().Contains(filterTurkish)).FirstOrDefault();
It seems to fix some of the issues, but it feels like a brute-force workaround rather than a solid solution. What are the best practices, or alternatives to this workaround, when using Code First C# against an Oracle database?
Thanks in advance
Ditch using all the UPPER functions. Simply let Oracle do your language aware case-insensitive matching. This is done by setting your DB connection from C# to have the appropriate language parameters. This setting is just for your DB session, not a global change for the whole DB. I'm no C# wizard, so you'd have to figure out where to make these session settings in your db connection/pool code.
ALTER SESSION SET nls_language=TURKISH;
ALTER SESSION SET nls_comp=LINGUISTIC;
ALTER SESSION SET nls_sort=BINARY_CI;
If it proves too difficult to find where to change this in C#, you could set it up as a user/schema logon trigger (below), which applies these settings automatically for you at db connect time (replace SOMEUSER with your actual db username). This only affects NEW db sessions, so if you have pooled connections, you'll want to cycle the DB connection pool to refresh them.
CREATE OR REPLACE TRIGGER SOMEUSER.SET_NLS_CASE_INSENSITIVE_TRG AFTER
LOGON ON SOMEUSER.SCHEMA
BEGIN
EXECUTE IMMEDIATE 'ALTER SESSION SET nls_language=TURKISH';
EXECUTE IMMEDIATE 'ALTER SESSION SET nls_comp=LINGUISTIC';
EXECUTE IMMEDIATE 'ALTER SESSION SET nls_sort=BINARY_CI';
END;
/
Here's a little test I did in an Oracle DB:
CREATE TABLE mypeople (name VARCHAR2(10 CHAR));
INSERT INTO mypeople VALUES ('Alİ Hassan');
INSERT INTO mypeople VALUES ('AlI Hassan');
INSERT INTO mypeople VALUES ('Ali Hassan');
INSERT INTO mypeople VALUES ('Alı Hassan');
SELECT name FROM mypeople WHERE name LIKE 'Ali%';
NAME
----------
Ali Hassan
ALTER SESSION SET nls_language=TURKISH;
ALTER SESSION SET nls_comp=LINGUISTIC;
ALTER SESSION SET nls_sort=BINARY_CI;
SELECT name FROM mypeople WHERE name LIKE 'Ali%';
NAME
----------
Alİ Hassan
AlI Hassan
Ali Hassan
The implementation of String.Contains is different for different providers; for example, LINQ to SQL is always case insensitive. Whether the search is case sensitive or not depends on server settings. For example, SQL Server by default uses the SQL_Latin1_General_CP1_CI_AS collation, which is NOT case sensitive. For Oracle you can change this behavior at the session level: Case insensitive searching in Oracle (issue a raw SQL query using the context.Database.ExecuteSqlCommand method at the beginning of the session).
The problem is in the database, not in .NET. For example, this query:
FILES.Where(t => t.FILE_NAME.ToUpper() == "FILE.TXT") // Get rows from file-table
translates into this Oracle SQL with the oracle provider I have:
SELECT t0.BINARY_FILE, t0.FILE_NAME, t0.FILE_SIZE, t0.INFO, t0.UPLOAD_DATE
FROM FILES t0
WHERE (UPPER(t0.FILE_NAME) = :p0)
-- p0 = [FILE.TXT]
The contains with First() becomes this:
SELECT * FROM (SELECT t0.BINARY_FILE, t0.FILE_NAME, t0.FILE_SIZE, t0.INFO, t0.UPLOAD_DATE
FROM FILES t0
WHERE ((UPPER(t0.FILE_NAME) LIKE '%' || :p0 || '%')
OR (UPPER(t0.FILE_NAME) LIKE '%' || :p1 || '%')))
WHERE ROWNUM<=1
-- p0 = [FILE.TXT]
-- p1 = [FİLE.TXT]
So it depends on your database's culture settings, i.e. without knowing them I would say the "overlap" approach in your solution is the best way to solve it. Why can't you just check the database culture settings?
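If you can at least read the data dictionary, the relevant settings are visible through the standard Oracle views, for example:

SELECT * FROM nls_database_parameters WHERE parameter IN ('NLS_LANGUAGE', 'NLS_SORT', 'NLS_COMP');
SELECT * FROM nls_session_parameters  WHERE parameter IN ('NLS_LANGUAGE', 'NLS_SORT', 'NLS_COMP');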