Dapper incorrectly thinking a column is numeric - C#

I have been using Dapper for a while but have come across a strange issue.
I have a column in a database table called NoticeNo, which I alias in my query.
My Dapper query is
SELECT p.Id PaymentId, p.AmountPaid PaymentAmountPaid, np.NoticeNo PaymentRef
In the Database the column NoticeNo is defined as nchar
When I query it, it's mapped to a class with the following property:
public string PaymentRef { get; set; }
I query as follows: connection.Query<PaymentSummary>(sqlStr)
But, very strangely, Dapper seems to think that the NoticeNo column is a number, so if the stored notice no is 1234 then when it's queried the result is 1234.00.
Most of the notice numbers are numeric, but they don't have to be.
Any ideas?
Thanks ash.

Turns out it wasn't Dapper's fault. The query was part of a UNION, and I had the notice no and the amount columns in a different order in two of the queries, which was forcing the result to be a number!!
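For anyone who hits the same thing, the root cause is how SQL Server types a UNION rather than Dapper itself. A rough sketch of the kind of mistake involved (the second branch and all table names here are invented; only the column list from the question is real):

// Hypothetical repro - the second branch and the table names are made up.
// In a UNION, SQL Server picks each result column's type by data type precedence
// across all branches, so a money column lining up under the nchar NoticeNo
// forces that result column to be numeric - and Dapper simply maps whatever
// type the result set reports.
var sql = @"
    SELECT p.Id PaymentId, p.AmountPaid PaymentAmountPaid, np.NoticeNo PaymentRef
    FROM   Payments p JOIN Notices np ON np.Id = p.NoticeId
    UNION ALL
    SELECT r.Id,           r.NoticeNo,                     r.AmountRefunded   -- columns 2 and 3 swapped
    FROM   Refunds r";

// PaymentRef comes back as 1234.00 instead of '1234'.
var rows = connection.Query<PaymentSummary>(sql);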

Related

SQL query treating an int as a string - issues?

If I do a query like this
SELECT * from Foo where Bar = '42'
and Bar is an int column, will that string value be optimized to 42 in the db engine? Will it have some kind of impact if I leave it as it is instead of changing it to:
Select * from Foo where Bar = 42
This is done on a SQL Compact database if that makes a difference.
I know it's not the correct way to do it, but it's a big pain going through all the code looking at every query and DB schema to see whether the column is an int type or not.
SQL Server automatically converts it to INT because INT has higher data type precedence than VARCHAR.
You should also be aware of the impact that implicit conversions can
have on a query’s performance. To demonstrate what I mean, I’ve created and populated the following table in the AdventureWorks2008 database:
USE AdventureWorks2008;
IF OBJECT_ID ('ProductInfo', 'U') IS NOT NULL
DROP TABLE ProductInfo;
CREATE TABLE ProductInfo
(
ProductID NVARCHAR(10) NOT NULL PRIMARY KEY,
ProductName NVARCHAR(50) NOT NULL
);
INSERT INTO ProductInfo
SELECT ProductID, Name
FROM Production.Product;
As you can see, the table includes a primary key configured with the
NVARCHAR data type. Because the ProductID column is the primary key,
it will automatically be configured with a clustered index. Next, I
set the statistics IO to on so I can view information about disk
activity:
SET STATISTICS IO ON;
Then I run the following SELECT statement to retrieve product
information for product 350:
SELECT ProductID, ProductName
FROM ProductInfo
WHERE ProductID = 350;
Because statistics IO is turned on, my results include the following
information:
Table 'ProductInfo'. Scan count 1, logical reads 6, physical reads 0,
read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob
read-ahead reads 0.
Two important items to notice are that the query performed a scan and
that it took six logical reads to retrieve the data. Because my WHERE
clause specified a value in the primary key column as part of the
search condition, I would have expected an index seek to be performed
rather than a scan. The execution plan confirms that the database engine performed a scan rather than a seek; the details of that scan can be seen by hovering the mouse over the scan icon.
Notice that in the Predicate section, the CONVERT_IMPLICIT function is
being used to convert the values in the ProductID column in order to
compare them to the value of 350 (represented by @1) that I passed into the
WHERE clause. The reason that the data is being implicitly converted
is because I passed the 350 in as an integer value, not a string
value, so SQL Server is converting all the ProductID values to
integers in order to perform the comparisons.
Because there are relatively few rows in the ProductInfo table,
performance is not much of a consideration in this instance. But if
your table contains millions of rows, you’re talking about a serious
hit on performance. The way to get around this, of course, is to pass
in the 350 argument as a string, as I’ve done in the following
example:
SELECT ProductID, ProductName
FROM ProductInfo
WHERE ProductID = '350';
Once again, the statement returns the product information along with the statistics IO data. Now the index is being properly used to locate the record, and if you examine the execution plan you'll see that the values in the ProductID column are no longer being implicitly converted before being compared to the 350 specified in the search condition.
As this example demonstrates, you need to be aware of how performance
can be affected by implicit conversions, just like you need to be
aware of any types of implicit conversions being conducted by the
database engine. For that reason, you’ll often want to explicitly
convert your data so you can control the impact of that conversion.
You can read more about Data Conversion in SQL Server.
If you look at the MSDN chart that describes implicit conversions, you will find that a string is implicitly converted to an int.
Both should work in your case, but the norm is to use quotes anyway, because while this works:
Select * from Foo where Bar = 42
this does not:
Select * from Foo where Bar = %42%
and this will:
SELECT * from Foo where Bar = '%42%'
PS: you should also look at Entity Framework and LINQ queries; they make this simpler...
If I am not mistaken, SQL Server will read it as an INT if the string contains only numbers (numeric) and you're comparing it to an INTEGER column, but if the string is alphanumeric, then you will encounter an error or get an unexpected result.
My suggestion is, in the WHERE clause, if you are comparing against an integer column, do not put single quotes; that is the best practice to avoid errors and unexpected results.
You should always use parameters when executing SQL from code, to avoid security flaws (e.g. SQL injection).
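Pulling those two points together, here is a rough ADO.NET sketch (using the ProductInfo table from the quoted article; the connection string is assumed) of a parameter explicitly typed to match the NVARCHAR column, which avoids both SQL injection and the implicit conversion of the indexed column:

using System;
using System.Data;
using System.Data.SqlClient;

// Sketch only: connectionString is assumed; ProductInfo/ProductID come from the
// article above, where ProductID is an NVARCHAR(10) primary key.
using (var conn = new SqlConnection(connectionString))
using (var cmd = new SqlCommand(
    "SELECT ProductID, ProductName FROM ProductInfo WHERE ProductID = @id", conn))
{
    // Typing the parameter as NVARCHAR(10) matches the column, so the index can be
    // sought directly instead of CONVERT_IMPLICIT-ing every row, and the value is
    // never concatenated into the SQL text (no injection risk).
    cmd.Parameters.Add("@id", SqlDbType.NVarChar, 10).Value = "350";

    conn.Open();
    using (var reader = cmd.ExecuteReader())
    {
        while (reader.Read())
            Console.WriteLine($"{reader["ProductID"]}: {reader["ProductName"]}");
    }
}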

Insert row to any table with LINQ

I want to develop a dynamic insert method with LINQ.
Let's assume I have two tables like:
Product
{
id int,
name varchar(20),
price int
}
Factory
{
id int,
name varchar(50),
address varchar(240)
}
But consider that I don't know the structure of the tables, only their names.
This is how I get the column names of the table which I know the name of:
var db = new DataContext();
var columnNames = db.Mapping.MappingSource
.GetModel(typeof(DataContext))
.GetMetaType(typeof(table_name))
.DataMembers;
But I can't figure out how to get the column names of the table which I don't know the name of. What I tried so far:
context.Mapping.GetTables().FirstOrDefault(
x=> x.TableName == table_name ).Model.ContextType.Attributes;
table_name changes dynamically and can be Product, Factory, etc. But this way is a dead end, I think.
So, in the end I couldn't figure out how to get the column names of a random table, let alone insert a row into a random table.
I can do this the classic way using SqlCommand, but I want to know how to do it with LINQ.
As Mant101 said in his comment:
I don't think Linq is going to help here. You could write some code in
ADO.NET to get the columns definitions from the database, then use
some reflection to build an insert/update statement based on the
properties of the object that match the columns. I would ask why you
need to do this, are you working with some database that is in an
unknown state when the app runs?
And StriplingWarrior backs him up with:
Mant101 is right: The whole purpose of an object-relational mapper is
to make it easier to work with persisted data by converting it into
objects that you can use in the programming language. Since you're not
going to program against those objects in this case, you don't get any
value from LINQ-to-SQL. You're better off bypassing the ORM and using
straight SQL/ADO.NET.
Inserting into an arbitrary table using generic methods with LINQ seems impossible.
However, you can do it with plain SQL.
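For anyone who lands here, a rough sketch of the plain ADO.NET approach the commenters describe: read the column names from INFORMATION_SCHEMA, match them to the object's properties by reflection, and build a parameterised INSERT. Everything below is illustrative and assumes property names match column names exactly:

using System;
using System.Collections.Generic;
using System.Data.SqlClient;
using System.Linq;

static void InsertRow(SqlConnection conn, string tableName, object entity)
{
    // 1. Read the column names for the target table from the schema.
    var columns = new List<string>();
    using (var cmd = new SqlCommand(
        "SELECT COLUMN_NAME FROM INFORMATION_SCHEMA.COLUMNS WHERE TABLE_NAME = @t", conn))
    {
        cmd.Parameters.AddWithValue("@t", tableName);
        using (var reader = cmd.ExecuteReader())
        {
            while (reader.Read())
                columns.Add(reader.GetString(0));
        }
    }

    // 2. Keep only the properties of the object that match a column by name.
    //    (Identity columns would need to be skipped; omitted here for brevity.)
    var props = entity.GetType().GetProperties()
        .Where(p => columns.Contains(p.Name, StringComparer.OrdinalIgnoreCase))
        .ToList();

    // 3. Build "INSERT INTO [Table] (A, B) VALUES (@A, @B)" with parameters for the values.
    //    tableName itself is spliced into the text, so validate it against a known list first.
    var colList = string.Join(", ", props.Select(p => p.Name));
    var parList = string.Join(", ", props.Select(p => "@" + p.Name));
    using (var insert = new SqlCommand(
        $"INSERT INTO [{tableName}] ({colList}) VALUES ({parList})", conn))
    {
        foreach (var p in props)
            insert.Parameters.AddWithValue("@" + p.Name, p.GetValue(entity) ?? DBNull.Value);
        insert.ExecuteNonQuery();
    }
}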

How do I use Count(*) with DAL2?

I want to get counts for various groupings of data in some of my tables and am not sure if it is possible using DAL2.
I want to perform queries such as:
SELECT DISTINCT productType, COUNT(*) FROM Products GROUP BY productType
The information I come across only includes examples that allow the user to specify the WHERE part of the SQL. This example unfortunately skirts right around the WHERE part of the query so I am not sure how I should approach this using DAL2. Is it possible using DAL2 or do I need to query the database another way? If it can be done using DAL2, how do I execute such a query?
The examples showing only the WHERE part mean that PetaPoco fills in the "SELECT * FROM TableName" part for you, but of course you can execute your own SQL statement.
In your case:
public class ProductCount {
public int ProductType {get; set;}
public int Count {get; set;}
}
var ProductCountList = db.Fetch<ProductCount>(@"SELECT DISTINCT productType,
                                                COUNT(*) as Count
                                                FROM Products
                                                GROUP BY productType");
I can't tell you what is best practice, but I have a SQL Server back end and use DAL2 with a DNN module. I just created a view in SQL Server with my grouping and joins, and then mapped that view like a table (use the view name instead of the table name in the class annotations) with auto increment set to false. That worked for me, and I get the benefit of precompiled, non-dynamic queries. If you need to generate this dynamically, I am not sure what the best approach is.
I would love to hear from other members about this.
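For reference, a minimal sketch of what such a view-backed class might look like, assuming the standard DNN DAL2 data annotations and an invented view name; it can then be fetched like any other table-mapped class:

using DotNetNuke.ComponentModel.DataAnnotations;

// Rough sketch: the view name and column names are assumptions, not taken from the question.
// The class is annotated exactly as if the view were a table, with AutoIncrement = false
// because the view has no identity column.
[TableName("vw_ProductTypeCounts")]
[PrimaryKey("ProductType", AutoIncrement = false)]
public class ProductTypeCount
{
    public int ProductType { get; set; }
    public int Count { get; set; }
}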

Interesting LinqToSql behaviour

We have a database table that stores the location of some wave files plus related metadata. There is a foreign key (employeeid) on the table that links to an employee table. However, not all wav files relate to an employee; for these records employeeid is null. We are using LinqToSql to access the database, and the query to pull out all non-employee-related wav file records is as follows:
var results = from Wavs in db.WaveFiles
              where Wavs.employeeid == null
              select Wavs;
Except this returns no records, despite the fact that there are records where employeeid is null. On profiling SQL Server I discovered the reason no records are returned is that LinqToSql is turning it into SQL that looks very much like:
SELECT Field1, Field2 //etc
FROM WaveFiles
WHERE 1=0
Obviously this returns no rows. However, if I go into the DBML designer, remove the association and save, all of a sudden the exact same LINQ query turns into:
SELECT Field1, Field2 //etc
FROM WaveFiles
WHERE EmployeeID IS NULL
I.e. if there is an association then LinqToSql assumes that all records have a value for the foreign key (even though it is nullable and the property appears as a nullable int on the WaveFile entity) and as such deliberately constructs a where clause that will return no records.
Does anyone know if there is a way to keep the association in LinqToSql but stop this behaviour? A workaround I can think of quickly is to have a calculated field called IsSystemFile and set it to 1 if employeeid is null and 0 otherwise. However, this seems like a bit of a hack to work around strange behaviour of LinqToSql, and I would rather do something in the DBML file or define something on the foreign key constraint that will prevent this behaviour.
I think you should double-check your dbml file. Sounds like Linq doesn't know that employeeid is a nullable column. Or look at your .cs file. The attributes for this column should look like this:
[Column(Storage="_employeeid", DbType="Int")]
and not:
[Column(Storage="_employeeid", DbType="Int NOT NULL")]
try this:
var results = from Wavs in db.WaveFiles
              where DBNull.Value.Equals(Wavs.employeeid)
              select Wavs;
Another way, and arguably good practice, is to introduce a default employee that every such wave file is associated with, one that isn't associated with a real employee.
The column is defined as:
[Column(Storage="_employeeid", DbType="Int")]
The way around it, whilst leaving the association in place, was to do a left join from the employee entity collection.
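For completeness, a rough sketch of that left-join workaround in LINQ to SQL; the Employees table and Id property names are assumptions, not taken from the actual DBML:

// Left-join WaveFiles to Employees and keep only the rows with no match.
var results = from w in db.WaveFiles
              join e in db.Employees on w.employeeid equals (int?)e.Id into matched
              from e in matched.DefaultIfEmpty()
              where e == null          // only wave files with no employee
              select w;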

How to read the result of SELECT * from joined tables with duplicate column names in .NET

I am a PHP/MySQL developer, slowly venturing into the realm of C#/SQL Server and I am having a problem in C# when it comes to reading an SQL Server query that joins two tables.
Given the two tables:
TableA:
int:id
VARCHAR(50):name
int:b_id
TableB:
int:id
VARCHAR(50):name
And given the query
SELECT * FROM TableA,TableB WHERE TableA.b_id = TableB.id;
Now in C# I normally read query data in the following fashion:
SqlDataReader data_reader= sql_command.ExecuteReader();
data_reader["Field"];
Except in this case I need to differentiate between TableA's name column and TableB's name column.
In PHP I would simply ask for the field "TableA.name" or "TableB.name" accordingly but when I try something like
data_reader["TableB.name"];
in C#, my code errors out.
How can I fix this? And how can I read a query on multiple tables in C#?
The result set only sees the returned data/column names, not the underlying table. Change your query to something like
SELECT TableA.Name as Name_TA, TableB.Name as Name_TB from ...
Then you can refer to the fields like this:
data_reader["Name_TA"];
To those posting that it is wrong to use "SELECT *", I strongly disagree with you. There are many real world cases where a SELECT * is necessary. Your absolute statements about its "wrong" use may be leading someone astray from what is a legitimate solution.
The problem here does not lie with the use of SELECT *, but with a constraint in ADO.NET.
As the OP points out, in PHP you can index a data row via the "TABLE.COLUMN" syntax, which is also how raw SQL handles column name conflicts:
SELECT table1.ID, table2.ID FROM table1, table2;
Why DataReader is not implemented this way I do not know...
That said, a workable solution could build your SQL statement dynamically by:
querying the schema of the tables you're selecting from
build your SELECT clause by iterating through the column names in the schema
In this way you could build a query like the following without having to know what columns currently exist in the schema for the tables you're selecting from
SELECT TableA.Name as Name_TA, TableB.Name as Name_TB from ...
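A rough sketch of that idea, reading the column names from INFORMATION_SCHEMA and aliasing each one with its table name (the table names come from the question; everything else, including the helper name, is illustrative):

using System;
using System.Collections.Generic;
using System.Data.SqlClient;

// Returns e.g. "TableA.id AS TableA_id, TableA.name AS TableA_name, TableB.name AS TableB_name, ..."
static string BuildAliasedColumnList(SqlConnection conn, params string[] tables)
{
    var parts = new List<string>();
    foreach (var table in tables)
    {
        using (var cmd = new SqlCommand(
            "SELECT COLUMN_NAME FROM INFORMATION_SCHEMA.COLUMNS WHERE TABLE_NAME = @t ORDER BY ORDINAL_POSITION", conn))
        {
            cmd.Parameters.AddWithValue("@t", table);
            using (var reader = cmd.ExecuteReader())
            {
                while (reader.Read())
                {
                    var column = reader.GetString(0);
                    parts.Add($"{table}.{column} AS {table}_{column}");
                }
            }
        }
    }
    return string.Join(", ", parts);
}

// Usage (connection is assumed to be an open SqlConnection):
//   var columnList = BuildAliasedColumnList(connection, "TableA", "TableB");
//   var sql = $"SELECT {columnList} FROM TableA JOIN TableB ON TableA.b_id = TableB.id";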
You could try reading the values by index (a number) rather than by key.
name = data_reader[4];
You will have to experiment to see how the numbers correspond.
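Rather than experimenting by hand, you can ask the reader which name sits at each ordinal; a small sketch using the sql_command from the question (the ordinals shown are assumptions based on the column order in the question's tables):

// List every column's ordinal and name (duplicates included), so you can see
// which index belongs to TableA.name and which to TableB.name.
using (var reader = sql_command.ExecuteReader())
{
    for (int i = 0; i < reader.FieldCount; i++)
        Console.WriteLine($"{i}: {reader.GetName(i)}");

    while (reader.Read())
    {
        // With the schema from the question, the ordinals would likely be:
        // 0 = TableA.id, 1 = TableA.name, 2 = TableA.b_id, 3 = TableB.id, 4 = TableB.name
        var nameFromA = reader.GetString(1);
        var nameFromB = reader.GetString(4);
    }
}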
Welcome to the real world. In the real world, we don't use "SELECT *". Specify which columns you want, from which tables, and with which alias, if required.
Although it is better to use a column list to remove duplicate columns, if for any reason you want SELECT *, then just use
rdr["duplicate_column_name"]
This will return the first column's value; since the inner join will have the same values in both identical columns, this will accomplish the task.
Ideally, you should never have duplicate column names across a database schema, so if you can, rename your schema to avoid conflicting names.
That rule exists for this very situation: once you've done your join, it is just a new record set, and generally the table names do not go with it.
