enterprise library dbcommand AddInParameter method for Oracle - c#

My question is very similar to this question - Addin parameter for Oracle - except that I'm using Oracle 11g. The database has two different character sets for the VARCHAR (Western European) and NVARCHAR (Unicode) data types.
db.AddInParameter(cmd, "nationalColumn", DbType.String, "高野")
The national character set in the database is Unicode, so NVARCHAR columns are able to hold these characters.
My question is: how do I tell db.AddInParameter that the parameter I'm adding is an NVARCHAR and not a VARCHAR, which it seems to assume by default?
Adding to this: I'm using System.Data.OracleClient to connect to the database.

You can't encode Chinese characters in the Western European encoding. That encoding defines a limited number of characters, and they don't include Chinese.
What output did you expect? I'd expect either garbled data or an error.

Are you specifying the parameters yourself, or are you letting Enterprise Library work out the parameter types?
If you are calling
command.AddInParameter("parameterName", value);
try calling the procedure and letting Enterprise Library discover the parameters, as in this example:
DB.ExecuteNonQuery("PKG_USER.DELETE", userId);
The procedure expects an INT parameter called P_ID, but the call only passes in the parameter value. EntLib binds the values to the parameters by position.
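A hedged sketch contrasting the two styles, assuming the Enterprise Library 5 Database API (the procedure and parameter names come from the example above; the rest is illustrative):

Database db = DatabaseFactory.CreateDatabase();

// Style 1: build the command and declare each parameter yourself.
DbCommand cmd = db.GetStoredProcCommand("PKG_USER.DELETE");
db.AddInParameter(cmd, "P_ID", DbType.Int32, userId);
db.ExecuteNonQuery(cmd);

// Style 2: pass only the values and let EntLib discover the
// parameters, binding the values by position.
db.ExecuteNonQuery("PKG_USER.DELETE", userId);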
Also take a look at
this other post that I wrote: http://devstuffs.wordpress.com/2012/03/13/enterprise-library-5-with-odp-net/
this sample code: https://github.com/stanleystl/EntLib5ODP.NET
and this stack overflow answer: C#/Oracle: Specify Encoding/Character Set of Query?

I'm answering my own question, as I spent a week figuring it out. The System.Data.OracleClient provider doesn't understand that the column it's handling is a national-character-set column unless that is specified explicitly via OracleType.NVarChar.
To resolve the situation above, I had to override the AddInParameter function to add a switch case that checks whether the input DbType is a String and maps it to OracleType.NVarChar.
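A minimal sketch of the same idea; instead of subclassing, it re-types the parameter right after AddInParameter creates it (this assumes System.Data.OracleClient is the underlying provider, so the created parameter is an OracleParameter):

db.AddInParameter(cmd, "nationalColumn", DbType.String, "高野");
// AddInParameter maps DbType.String to VARCHAR2 by default; re-type
// the parameter just added (the last one in the collection) so the
// national character set is used.
((OracleParameter)cmd.Parameters[cmd.Parameters.Count - 1]).OracleType = OracleType.NVarChar;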
HTH


ExecuteSqlInterpolated/ExecuteSqlRaw parameters pass to database incorrectly in asp.net core

I am attempting to use ExecuteSqlInterpolated to update my database. However, it seems that there is a problem with my SQL parameters. Running the following code correctly updates intField1 and intField2; however, stringField becomes the literal string "@p0".
_context.Database.ExecuteSqlInterpolated($"UPDATE table SET stringField='{someString}', intField1={someInt}, intField2={someOtherInt} WHERE id='{id}'");
I have already verified that my variables contain the desired values when the string is passed to the method. I understand that @p0 is what SQL uses to represent the first parameter in the query, but why isn't it being replaced by the string I gave it? I have also tried using ExecuteSqlRaw but ran into the same issue. My knowledge of SQL is limited at best; I know just enough to get by in web dev, so I'm guessing I'm committing some simple error in crafting the query, but obviously I'm not sure.
I know it's late, but just don't use quotation marks around your parameters, especially where you use an int data type
(delete the single quotes):
_context.Database.ExecuteSqlInterpolated($"UPDATE table SET stringField={someString}, intField1={someInt}, intField2={someOtherInt} WHERE id={id}");
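If you ever need ExecuteSqlRaw instead, the same rule applies; here's a hedged sketch reusing the question's names, with numbered placeholders that EF Core still turns into DbParameters behind the scenes:

// Unquoted {0}..{3} placeholders are replaced with real parameters,
// never spliced into the SQL text as literals.
_context.Database.ExecuteSqlRaw(
    "UPDATE table SET stringField = {0}, intField1 = {1}, intField2 = {2} WHERE id = {3}",
    someString, someInt, someOtherInt, id);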

Why are `nvarchar` parameters faster than other types for 'text' `SqlCommand` commands?

Overview
This question is a more specific version of this one:
sql server - performance hit when passing argument of C# type Int64 into T-SQL bigint stored procedure parameter
But I've noticed the same performance hit for other data types (and, in fact, in my case I'm not using any bigint types at all).
Here are some other questions that seem like they should cover the answer to this question, but I'm observing the opposite of what they indicate:
c# - When should "SqlDbType" and "size" be used when adding SqlCommand Parameters? - Stack Overflow
.net - What's the best method to pass parameters to SQLCommand? - Stack Overflow
Context
I've got some C# code for inserting data into a table. The code is itself data-driven, in that some other data specifies the target table into which the data should be inserted. So, though I could use dynamic SQL in a stored procedure, I've opted to generate dynamic SQL in my C# application.
The command text is always the same for every row I insert, so I generate it once, before inserting any rows. The command text is of the form:
INSERT SomeSchema.TargetTable ( Column1, Column2, Column3, ... )
VALUES ( SomeConstant, @p0, @p1, ... );
For each insert, I create an array of SqlParameter objects.
For the 'nvarchar' behavior, I'm just using the SqlParameter(string parameterName, object value) constructor, and not setting any other properties explicitly.
For the 'degenerate' behavior, I was using the SqlParameter(string parameterName, SqlDbType dbType) constructor and also setting the Size, Precision, and Scale properties as appropriate.
For both versions of the code, the value, whether passed to the constructor or separately assigned to the Value property, has the type object.
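To make the two styles concrete, here is a hedged sketch; the names, the Decimal type, and the Precision/Scale values are illustrative, not the actual target-table schema:

// 'nvarchar' style: name and value only; ADO.NET infers the type,
// so a string value is sent as nvarchar.
var inferred = new SqlParameter("@p0", (object)value);

// 'degenerate'/'type-specific' style: declare the SqlDbType and
// Size/Precision/Scale explicitly, then assign the value.
var typed = new SqlParameter("@p1", SqlDbType.Decimal)
{
    Precision = 18,
    Scale = 4,
    Value = (object)value
};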
The 'nvarchar' version of the code takes about 1-1.5 minutes. The 'degenerate' or 'type-specific' code takes longer than 9 minutes; so 6-9 times slower.
SQL Server Profiler doesn't reveal any obvious culprits. The type-specific code is generating what would seem like better SQL, i.e. a dynamic SQL command whose parameters contain the appropriate data type and type info.
Hypothesis
I suspect that, because I'm passing an object-typed value as the parameter value, the ADO.NET SQL Server client code is casting, converting, or otherwise validating the value before generating and sending the command to SQL Server. I'm surprised, though, that the conversion from nvarchar to each of the relevant target-table column types, which SQL Server must be performing, is so much faster than whatever the client code is doing.
Notes
I'm aware that SqlBulkCopy is probably the best-performing option for inserting large numbers of rows but I'm more curious why the 'nvarchar' case out-performs the 'type-specific' case, and my current code is fast enough as-is given the amount of data it routinely handles.
The answer does depend on the database you are running, but it has to do with the character-encoding process. SQL Server introduced the NVarChar and NText field types to handle UTF-16 encoded data, and UTF-16 also happens to be the internal string representation of the .NET CLR. NVarChar and NText values don't have to be converted to another character encoding, which saves a short but measurable amount of time.
Other databases allow you to define character encoding at the database level, and others let you define it on a column by column basis. The performance differences really depend on the driver.
Also important to note:
Inserting using a prepared statement emphasizes inefficiencies in converting to the database's internal format
This has no bearing on how efficiently the database queries against a string; note that UTF-16 takes up more space than the default Windows-1252 encoding used for Text and VarChar.
Of course, in a global application, UTF support is necessary
They're Not (but They're Almost as Fast)
My original discrepancy was entirely my fault. The way I was creating the SqlParameter objects for the 'degenerate' or 'type-specific' version of the code used one more loop than the 'nvarchar' version. Once I rewrote the type-specific code to use the same number of loops (one), the performance was almost the same. [About 1-2% slower now, instead of 500-800% slower.]
A slightly modified version of the type-specific code is now a little faster, at least based on my (limited) testing: about 3-4% faster for ~37,000 command executions.
But it's still (a little) surprising that it's not even faster, as I'd expect SQL Server converting hundreds of nvarchar values to lots of other data types (for every execution) to be significantly slower than the C# code to add type info to the parameter objects. I'm guessing it's really hard to observe much difference because the time for SQL Server to convert the parameter values is fairly small relative to the time for all of the other code (including the SQL client code communicating with SQL Server).
One lesson I hope to remember is that it's very important to compare like with like.
Another seeming lesson is that SQL Server is pretty fast at converting text to its various other data types.

c# mysql AddWithValue unicode

I'm working with C# and MySQL now. I've searched around the internet for days to find out why I can't use the AddWithValue method to add Unicode characters: when I insert them manually in MySQL, it works, but from the C# code with the MySQL connector for .NET it doesn't. Everything other than the Unicode characters is fine.
cmd.CommandText = "INSERT INTO tb_osm VALUES (@id, @timestamp, @user)";
cmd.Parameters.AddWithValue("@id", osmobj.ID);
cmd.Parameters.AddWithValue("@timestamp", osmobj.TimeStamp);
cmd.Parameters.AddWithValue("@user", osmobj.User);
cmd.ExecuteNonQuery();
For example, osmobj.User = "ສະບາຍດີ" becomes "???????" in the database.
Please T^T
Does this link help you?
read/write unicode data in MySql
Basically it says you should append charset=utf8; to your connection string.
Like so:
id=my_user;password=my_password;database=some_db123;charset=utf8;
You have to be sure that unicode characters are supported at every level of the process, all the way from the input into C# to the column stored in MySql.
The C# level is easy, because strings are already utf-16 by default. As long as you're not using some weird gui toolkit, reading from a bad file or network stream, or running in a weird console app environment with no unicode support, you'll be in good shape.
The next layer is the parameter definition. Here, you're better off avoiding the AddWithValue() method anyway. The link pertains to SQL Server, but the same reasoning applies to MySQL, even if MySQL is less strict with your data than it should be. You should use an Add() override that lets you explicitly declare the type of your parameters as NVarChar, instead of making the ADO.NET provider try to guess.
Next up is the connection between your application and the database. Here, you want to make sure to include the charset=utf8 clause (or better) as part of the connection string.
Then we need to think about the collation of the database itself. You have to be sure that an NVarChar column in MySQL will be able to support your data. One of the answers to the question at the previous link also covers how to handle this.
Finally, make sure the column is defined with the NVarChar type, instead of just VarChar.
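Putting the parameter advice into code, a hedged sketch of the question's insert using Add() with explicit types. The osmobj member types are assumptions, and since MySql.Data's MySqlDbType exposes no dedicated NVarChar member, VarChar is used here and the utf8 connection and column settings carry the Unicode:

cmd.CommandText = "INSERT INTO tb_osm VALUES (@id, @timestamp, @user)";
// Declare each parameter's type instead of letting AddWithValue guess.
cmd.Parameters.Add("@id", MySqlDbType.Int64).Value = osmobj.ID;
cmd.Parameters.Add("@timestamp", MySqlDbType.DateTime).Value = osmobj.TimeStamp;
cmd.Parameters.Add("@user", MySqlDbType.VarChar).Value = osmobj.User;
cmd.ExecuteNonQuery();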
Yes, utf8 at all stages: byte encoding in the client, conversion on the wire (charset=utf8), and on the column. I do not know whether C# converts from UTF-16 to UTF-8 before exposing the characters; if it does not, then charset=utf16 (or no setting) may be the correct step.
Because you got multiple ?, the likely cause is trying to store non-latin1 characters into a CHARACTER SET latin1 column. Since latin1 has no codes for Lao, ? was substituted. You probably said nothing about the column's character set, but depended on the DEFAULT of the table and/or database, which happened to be latin1.
The ສະບາຍດີ is lost and cannot be recovered from ???????.
Once you have changed things, check that it is stored correctly by doing SELECT col, HEX(col) .... For the string ສະບາຍດີ, you should get hex E0BAAAE0BAB0E0BA9AE0BAB2E0BA8DE0BA94E0BAB5. Notice how that is groups of E0BAxx, which is the range of utf8 values for Lao.
If you still have troubles, please provide the HEX for further analysis.

access mdb date/time issue

I am trying to fetch the records for 3rd June 2013 from my database, which is made in MS Access. Dates are stored in the format dd/MM/yyyy. Below is my query:
AND (a.Date = #" + date + "#) ) order by e.E_ID asc
But the amazing thing is: I have inserted a record with the date 03/06/2013, which is today's date, while it takes it as 6th March 2013. I have corrected my regional settings, but it's still the same issue. Also, in my query, when matching the date I am using dd/MM/yyyy. Is this a bug from Microsoft? Please help.
Dates are stored in the format of dd/MM/yyyy
I suspect they're not. I suspect they're stored in some native date/time format, which is doubtless much more efficient than a 10-character string. (I'm assuming you're using an appropriate field type rather than, say, varchar.) It's important to differentiate between the inherent nature of the data and "how it gets displayed when converted to text".
But the amazing thing
I don't see this as amazing. I see it as a perfectly natural result of using string conversions unnecessarily. They almost always bite you in the end. You're not trying to represent a string - you're trying to represent a date. So use that type as far as you possibly can.
You should:
Use parameterized SQL for queries, for many reasons: most importantly to avoid SQL injection attacks, but also to avoid unnecessary string conversions of this kind
Specify the parameter value as a DateTime, thus avoiding the string conversion
You haven't specified which provider type you're using; my guess is OleDbConnection etc. Generally, if you look at the documentation for the Parameters property of the relevant command class, you'll find an appropriate example. For example, OleDbCommand.Parameters shows a parameterized query on an OleDbConnection. One thing worth noting from the docs:
The OLE DB .NET Provider does not support named parameters for passing parameters to an SQL statement or a stored procedure called by an OleDbCommand when CommandType is set to Text. In this case, the question mark (?) placeholder must be used. [...]
Therefore, the order in which OleDbParameter objects are added to the OleDbParameterCollection must directly correspond to the position of the question mark placeholder for the parameter in the command text.
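Putting those points together, a minimal sketch of such a parameterized query, assuming OleDb; the table and column names are illustrative, not the asker's actual schema:

using (var conn = new OleDbConnection(connectionString))
using (var cmd = new OleDbCommand(
    "SELECT * FROM Events WHERE EventDate = ? ORDER BY E_ID ASC", conn))
{
    // OleDb binds by position, so the parameter name is cosmetic; the
    // value travels as a DateTime, with no string conversion anywhere.
    cmd.Parameters.Add("@EventDate", OleDbType.Date).Value = new DateTime(2013, 6, 3);
    conn.Open();
    using (var reader = cmd.ExecuteReader())
    {
        while (reader.Read()) { /* process the row */ }
    }
}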

Using a ref cursor as input type with ODP.NET

I'm trying to use a RefCursor as an input parameter to an Oracle stored procedure. The idea is to select a group of records, feed them into the stored procedure, and then have the SP loop over the input RefCursor, doing some operations on its records. No, I can't select the records inside the SP and thus avoid having to use the RefCursor as an input type.
I've found an example of how to do this in Oracle's documentation (this here would be the link, but it seems I cannot use them yet), but it uses a simple SELECT to populate the input RefCursor; and therein lies the rub: I've got to populate it from code.
You see, in code I have this:
[OracleDataParameter("P_INPUT", OracleDbType.RefCursor, ParameterDirection.Input)]
private List<MiObject> cursor;
And I've tried populating cursor with a List&lt;T&gt;, a DataTable, even a plain array of MyObject, and nothing works. When I try running my tests I get an error:
"Invalid Parameter Linking"
Maybe not the exact wording, as I'm translating from Spanish, but that's the message.
Any ideas?
I'm also in contact with Mark Williams, the author of the article I tried to link in my post, and he has kindly responded like this:
"
It is no problem to send me email; however, I think I will disappoint you with my answer on this one.
Unfortunately you can't do what you are trying to do (create a refcursor from the client like that).
A couple of problems with that are that a refcursor refers to memory owned by Oracle on the server and Oracle has no concept of client items like a DataTable or a .NET List, etc.
Do you have any other options available other than using a refcursor?
"
So basically I'm screwed, and this question is closed. Thanks for reading and/or trying to help, you all.
From memory, isn't there an OracleCursor class somewhere in the ODP.NET library that works?
Look at this sample for using a refcursor as input to PL/SQL, from Oracle TechNet.
The key point is that the input refcursor object must be created by Oracle itself. You cannot convert a List or anything else to a refcursor.
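A hedged sketch of that server-side approach with ODP.NET: have Oracle itself open the cursor inside an anonymous PL/SQL block and hand it straight to the procedure. MY_PKG.PROCESS, my_table, and the bind are illustrative names, not from the original question:

var cmd = conn.CreateCommand();
cmd.CommandText = @"
    DECLARE
      l_cur SYS_REFCURSOR;
    BEGIN
      OPEN l_cur FOR SELECT * FROM my_table WHERE flag = :p_flag;
      MY_PKG.PROCESS(p_input => l_cur);
    END;";
// Only the scalar bind crosses the client/server boundary; the
// ref cursor itself never leaves the server.
cmd.Parameters.Add("p_flag", OracleDbType.Varchar2).Value = "Y";
cmd.ExecuteNonQuery();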
