I just realized that if your C# application uses LINQ to SQL classes to interface with the database, you can log your query like this:
using (DatabaseContext context = new DatabaseContext())
{
    // Write the generated SQL to the console as queries execute.
    context.Log = Console.Out;

    var query = from Person p in context.People
                where p.Name == "john"
                select p;

    Console.WriteLine(query.First().Name);
}
What is the equivalent in LINQ to Entities (is this another name for ADO.NET?) of
context.Log = Console.Out
Or is there another way to see the actual SQL query sent to the database?
I always use SQL Profiler, assuming you have MS SQL Server. Which DBMS is this for? LINQ to Entities supports multiple database types.
This also works...
var cust = (from c in context.Customers select c);
string sql = ((ObjectQuery)cust).ToTraceString();
From MSDN forums
With Entity Framework 6 it is possible to just do a ToString() on your query, at least when using MySQL:
var cust = (from c in context.Customers select c);
string sql = cust.ToString();
As Greg notes in the comments, this gives you the parameterized query, so you will need to substitute in the parameter values yourself.
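For completeness, EF6 also exposes a Database.Log delegate, which is probably the closest analogue to LINQ to SQL's Log property. A minimal sketch, reusing the context and entity names from the question as placeholders:

using (var context = new DatabaseContext())
{
    // EF6: route the generated SQL (and parameter values) to the console,
    // much like context.Log = Console.Out in LINQ to SQL.
    context.Database.Log = Console.Write;

    var people = context.People.Where(p => p.Name == "john").ToList();
}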
You can trace your SQL when using Linq2Entities
https://stackoverflow.com/questions/137712/sql-tracing-linq-to-entities
You may also want to look at this tool
Huagati Query Profiler
I believe the Tabular Data Stream (TDS) protocol used by Microsoft SQL Server sends commands and responses in plain text by default, so unless you encrypt the connection between your SQL Server and the client, you should be able to view both the request and the response with a comprehensive packet sniffer.
It will take some work but using a packet sniffer in this manner should allow you to see what T-SQL your LINQ is getting translated to.
Side Notes:
I recommend that you encrypt all communications between your client and SQL server unless both the client and server reside on the same machine and you are doing development testing.
If you can't risk using an unencrypted connection even for testing purposes, your packet sniffer may have a plugin that will allow you to decrypt the encrypted traffic, but I am not sure whether there are any risks in using such a decryption plugin.
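For reference, turning encryption on from the client side is just a connection-string change; a minimal sketch assuming System.Data.SqlClient, with placeholder server and database names:

// Sketch only: server, database and security settings are placeholders.
var connectionString =
    "Data Source=myServer;Initial Catalog=MyDb;Integrated Security=True;" +
    "Encrypt=True;TrustServerCertificate=False";

using (var connection = new System.Data.SqlClient.SqlConnection(connectionString))
{
    connection.Open();   // traffic on this connection is now TLS-encrypted
}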
EF doesn't have a direct parallel to the stream-based logging that LINQ to SQL uses. There are a number of profiling options available at a variety of costs. I've discussed some of these options at http://www.thinqlinq.com/Post.aspx/Title/LINQ-to-Database-Performance-hints. You can find a listing of these profilers and other LINQ tools at http://www.thinqlinq.com/Post.aspx/Title/linq-tools.
Related
It is changing my queries and appears to be fully qualifying my tables without me explicitly telling it to. Is there a way to stop it from doing that?
Here is the pertinent information as I see it. Let me know if anything else would be helpful.
We had a SQL Server named serverName. It had been in production for years. It was migrated from a Windows 2008 Server to a Windows 2012 Server. The new server's name is sql_1234_4321 (not the real name, but just as terrible).
We have any number of applications that were hitting the old serverName SQL Server, so we took the old server offline and created a DNS entry for serverName that points at the new sql_1234_4321, hoping we wouldn't have to touch the connection strings for all the apps that were hitting the old server.
This worked for the most part except for some C# ASP.NET MVC apps.
They are using System.Data.SqlClient.SqlCommand.
Connection string:
Data Source=serverName;Initial Catalog=USData; Persist Security Info=True; User ID=appUn;Password=appPw
SQL query:
select FirstName from Customers
Code:
using (SqlCommand cmd = new SqlCommand(query, sqlConnection))
{
    if (parameters != null)
    {
        cmd.Parameters.AddRange(parameters.ToArray());
    }

    var reader = cmd.ExecuteReader();
    var results = new List<TType>();
    while (reader.Read())
    {
        results.Add(convert(reader));
    }
    return results;
}
I get an error:
Could not find server 'serverName' in sys.servers. Verify that the correct server name was specified. If necessary, execute the stored procedure sp_addlinkedserver to add the server to sys.servers.
Why this error? The only place serverName is referenced is in the connection string. My query should just use the default namespaces once it's on the server. But it appears that my query is being fully qualified at some point in the process into the following:
select FirstName from serverName.USData.dbo.Customers
I added a linked server serverName on the new sql_1234_4321 server that just points back to itself, and this seemed to fix the problem. However, this feels absolutely dirty and makes me wonder whether it REALLY is doing a cross-server query at that point, or whether it's smart enough to say "HEY! We are hitting ourselves, so don't worry about going out to the network and making this more expensive than it should be", but I doubt it.
I thought about using synonyms, but the problem is we have tables with the server name in them, and there may be queries hitting the server with the server name in them, so the following would not work:
CREATE SYNONYM serverName FOR sql_1234_4321;
So then it would make sense that I'd have to make a specific synonym for each database on the server:
CREATE SYNONYM serverName.database1 FOR sql_1234_4321.database1;
CREATE SYNONYM serverName.database2 FOR sql_1234_4321.database2;
CREATE SYNONYM serverName.database3 FOR sql_1234_4321.database3;
CREATE SYNONYM serverName.database4 FOR sql_1234_4321.database4;
CREATE SYNONYM serverName.database5 FOR sql_1234_4321.database5;
CREATE SYNONYM serverName.database6 FOR sql_1234_4321.database6;
CREATE SYNONYM serverName.database7 FOR sql_1234_4321.database7;
CREATE SYNONYM serverName.database8 FOR sql_1234_4321.database8;
CREATE SYNONYM serverName.database9 FOR sql_1234_4321.database9;
CREATE SYNONYM serverName.database10 FOR sql_1234_4321.database10;
As you can see, this would be a nightmare to maintain and besides that feels super dirty.
My question is this... At what point is the table name being fully qualified based on the connection string? Is there a way to prevent that from happening?
David Browne led me to find the issue. My query actually referenced a view, and the view had a reference to the old server. Huge oversight on my part not to notice that. Thanks David.
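For anyone hitting the same thing, a quick way to hunt for such leftover references is to search the object definitions on the new server. A sketch using SqlClient, where the connection string and the LIKE pattern are placeholders:

using System;
using System.Data.SqlClient;

// Sketch: list views/procedures/functions whose definition still mentions
// the old server name. Connection string and search text are placeholders.
var connectionString = "Data Source=sql_1234_4321;Initial Catalog=USData;Integrated Security=True";
const string sql = @"
    SELECT OBJECT_SCHEMA_NAME(object_id) AS SchemaName,
           OBJECT_NAME(object_id)        AS ObjectName
    FROM   sys.sql_modules
    WHERE  definition LIKE '%serverName%';";

using (var connection = new SqlConnection(connectionString))
using (var cmd = new SqlCommand(sql, connection))
{
    connection.Open();
    using (var reader = cmd.ExecuteReader())
    {
        while (reader.Read())
        {
            Console.WriteLine(reader.GetString(0) + "." + reader.GetString(1));
        }
    }
}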
I have recently been changing some C# programs to add proper parameterization to some MySQL statements that had originally been written with concatenated strings. Invariably, I've run into some problems with my statements, and I can't find a way to directly see the complete MySQL statement with the parameters applied, other than this workaround where I pass the MySqlCommand to this:
private string getMySqlStatement(MySqlCommand cmd)
{
    // CommandText is already a string, so no ToString() is needed.
    string result = cmd.CommandText;

    // Replace longer parameter names first so that e.g. @name does not
    // clobber part of @name2. (Requires using System.Linq.)
    foreach (MySqlParameter p in cmd.Parameters.Cast<MySqlParameter>()
                                               .OrderByDescending(x => x.ParameterName.Length))
    {
        // Quote string values; leave other values unquoted.
        string addQuote = (p.Value is string) ? "'" : "";
        result = result.Replace(p.ParameterName, addQuote + p.Value + addQuote);
    }
    return result;
}
This works, but I was wondering whether there is a more proper way to see the full statement with the parameters applied. Reading up on this, it looks like the parameters aren't actually applied until the statement reaches the server - is this correct? In that case, I suppose I can stick to my function above, but I just wanted to know if there was a better way to do it.
Note: I am just using this function for debugging purposes so I can see the MySQL statement.
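For what it's worth, a tiny usage sketch of the helper above; the table, column, and parameter names are made up for illustration, and connection is assumed to be an open MySqlConnection:

using (var cmd = new MySqlCommand("SELECT * FROM users WHERE name = @name", connection))
{
    cmd.Parameters.AddWithValue("@name", "john");

    // Should print something like: SELECT * FROM users WHERE name = 'john'
    Console.WriteLine(getMySqlStatement(cmd));
}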
MySQL supports two protocols for client/server communication: text and binary. In the text protocol there is no support for command parameters in the protocol itself; they are simulated by the client library. With Connector/NET, the text protocol is always used unless you set IgnorePrepare=false in the connection string and call MySqlCommand.Prepare() for each command. So it's most likely the case that you are using the text protocol. This is good, because it makes it easier to log the actual statements with the parameters applied.
There are three ways to view the statements being executed:
Use Connector/NET Logging
Add Logging=true to your connection string and create a TraceListener that listens for the QueryOpened event. This should contain the full SQL statement with parameters interpolated. Instructions on setting this up are here.
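As a rough sketch of the connection-string side of this (server, database and credentials are placeholders; the trace listener itself is configured as described in the linked instructions):

// Sketch: enable Connector/NET client-side tracing via the connection string.
var connectionString = "server=localhost;database=mydb;uid=me;pwd=secret;Logging=true";

using (var connection = new MySql.Data.MySqlClient.MySqlConnection(connectionString))
{
    connection.Open();
    // Statements executed on this connection are now written to the
    // Connector/NET trace source, which your TraceListener can capture.
}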
Use MySQL Server Logging
Enable the general query log on your server to see all queries that are being executed. This is done with the --general_log=1 --general_log_file=/var/path/to/file server options.
Packet Sniffing
If you're not using SslMode=Required (to encrypt the connection between client and server), then you can use WireShark to capture the network traffic between your client and the server. WireShark has MySQL protocol analysers that will inspect MySQL traffic and identify command packets (which contain the SQL queries). This option is ideal if you aren't able to modify your client program or change the server logging settings.
My setup for testing dapper calls against SQL Server using in-memory SQLite is similar to this (http://mikhail.io/2016/02/unit-testing-dapper-repositories/), using this lib: https://github.com/ServiceStack/ServiceStack.OrmLite
I'm using dapper with ad hoc SQL for my DAL and wanted to test the data access layer without a dependency on SQL Server, so I used an SQLite in-memory database. The problem is that the SQL syntax differs between SQL Server and SQLite.
For example, I have a query that returns paged results using OFFSET and FETCH NEXT, but SQLite only supports LIMIT and OFFSET.
What suggestions, if any, do you have for doing my in-memory unit tests? I didn't go the EF route with a mocked db context because dapper is more performant, and I didn't want to use stored procedures because I wanted to test my SQL as well. I'm not looking to mock my database calls.
OrmLite's typed API is RDBMS-agnostic, so as long as you stick to it you can easily switch between different databases by just changing the connection string and dialect provider, e.g.:
// SQL Server
var dbFactory = new OrmLiteConnectionFactory(connectionString,
    SqlServerDialect.Provider);

// In-memory SQLite DB
var dbFactory = new OrmLiteConnectionFactory(":memory:",
    SqliteDialect.Provider);
Then you can use either database to create, persist and query POCOs, e.g.:
using (var db = dbFactory.Open())
{
    db.DropAndCreateTable<Poco>();
    db.Insert(new Poco { Name = name });

    var results = db.Select<Poco>(x => x.Name == name);
    results.PrintDump();
}
But if you use the custom SQL APIs to execute MSSQL-specific SQL, you won't be able to execute that against SQLite. You can make use of the mockable support in OrmLite, but I'd personally recommend sticking to OrmLite's RDBMS-agnostic typed APIs instead.
I am developing a line of business application which has to, for reasons out of my control, use a client server architecture.
I.e. clients all connect to an application server, the application server connects to the database, etc.
To do this in the past I have created a WCF service which exposes CRUD-type methods for the database. Methods like these exist in the WCF service:
Customer GetCustomer(int customerId);
List<Customer> GetAllCustomers();
etc...
However I have always found the same 2 problems with this:
1) There's a LOT of plumbing code which connects: client -> app server -> db server
2) When client applications need to grab more complex data, I end up having to add methods on the server side which end up as something horrible like this:
Customer GetCustomerByNameWhereCustomerHasBoughtProduct(string name, int productCode);
Or I end up returning way more data than needed and processing it on the client side, which is slow and really bad for the database. Something like:
List<Customer> customers = _Service.GetAllCustomers();
List<Product> products = _Service.GetAllProducts();

// Client-side join: pulls every customer and product over the wire,
// then filters in memory.
List<Customer> customersWhoBoughtX =
    (from c in customers
     where c.OrderLog.Contains(products.First(p => p.Code == x))
     select c).ToList();
What am I doing wrong here? This must be solvable some way.
Is there a way to expose a database through a WCF service using conventions? Or any other idea that could help with what I'm doing?
Ideally I would say the clients could connect to the database directly, however I am told this is an issue which can't be changed.
I would really appreciate some pointers.
Thanks
Consider exposing your entities using OData. Then on the client you can write LINQ queries in a way similar to writing EF LINQ queries. Here's an article with the details:
http://www.vistadb.net/tutorials/entityframework-odata-wcf.aspx
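As a rough illustration of what the client side can look like against an OData (WCF Data Services) endpoint; the service URI, entity set name, and Customer type below are placeholders rather than anything from the linked tutorial:

using System;
using System.Data.Services.Client;
using System.Linq;

// Sketch: query a hypothetical OData endpoint from the client.
var context = new DataServiceContext(new Uri("http://appserver/CustomerData.svc"));

var customers = context.CreateQuery<Customer>("Customers")
    .Where(c => c.Name == "john")
    .ToList();
// The Where clause is translated into an OData $filter query option,
// so only the matching customers come back over the wire.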
I have one database with one mirror in high-safety mode (using a witness server at the moment, but planning to take it out). This database will be used to store data gathered by a C# program.
I want to know how I can check, from my program, the state of all the SQL Server instances, and how to cause/force a manual failover.
Is there any C# API to help me with this?
Info: I'm using SQL Server 2008.
Edit: I know I can query sys.database_mirroring, but for that I need the principal database up and running. I would like to contact each SQL instance and check its status.
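For reference, a minimal sketch of both halves of this: checking the mirroring state through the catalog view, and forcing a manual failover with plain T-SQL through SqlClient. Instance, database, and credential names are placeholders, and the failover command has to be run against the current principal:

using System;
using System.Data.SqlClient;

// Check the mirroring role/state of a specific database on one instance.
var connectionString = "Data Source=someInstance;Initial Catalog=master;Integrated Security=True";
const string stateSql = @"
    SELECT mirroring_role_desc, mirroring_state_desc
    FROM   sys.database_mirroring
    WHERE  database_id = DB_ID('MyMirroredDb');";

using (var connection = new SqlConnection(connectionString))
using (var command = new SqlCommand(stateSql, connection))
{
    connection.Open();
    using (var reader = command.ExecuteReader())
    {
        while (reader.Read())
        {
            Console.WriteLine(reader["mirroring_role_desc"] + ": " + reader["mirroring_state_desc"]);
        }
    }

    // Forcing a manual failover is a one-liner, issued on the principal:
    using (var failover = new SqlCommand("ALTER DATABASE MyMirroredDb SET PARTNER FAILOVER", connection))
    {
        // failover.ExecuteNonQuery();   // uncomment to actually fail over
    }
}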
Use SQL Server Management Objects (SMO).
SQL Server Management Objects (SMO) is a collection of objects that are designed for programming all aspects of managing Microsoft SQL Server. SQL Server Replication Management Objects (RMO) is a collection of objects that encapsulates SQL Server replication management.
I have used SMO in managed applications before - works a treat.
To find out the state of an instance, use the Server object - it has State and Status properties.
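A minimal sketch of that, assuming the SMO assemblies are referenced; the instance name is a placeholder:

using System;
using Microsoft.SqlServer.Management.Smo;

// Connect to a named instance and read its state/status via SMO.
var server = new Server(@"myServer\SQLEXPRESS");

Console.WriteLine(server.State);    // e.g. Existing
Console.WriteLine(server.Status);   // e.g. Online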
After playing around a bit I found this solution (I'm not sure if this is a proper solution, so please leave comments):
using Microsoft.SqlServer.Management.Smo.Wmi;

ManagedComputer mc = new ManagedComputer("localhost");
foreach (Service svc in mc.Services)
{
    if (svc.Name == "MSSQL$SQLEXPRESS")
    {
        textSTW.Text = svc.ServiceState.ToString();
    }
    if (svc.Name == "MSSQL$TESTSERVER")
    {
        textST1.Text = svc.ServiceState.ToString();
    }
    if (svc.Name == "MSSQL$TESTSERVER3")
    {
        textST2.Text = svc.ServiceState.ToString();
    }
}
This way I'm just looking at the state of the services (Running/Stopped), and it's much faster. Am I missing something?