I am basically running a SQL query through Dapper, but when I profile it, on every query I perform against Npgsql I see an extra ExecuteScalar query sent on that connection, and there are multiple NpgsqlConnection.Close events. I run the query in a using statement that disposes the NpgsqlConnection, as follows.
using (var connection = new NpgsqlConnection(connectionString))
{
    return await connection.QueryAsync<T>(sql, param);
}
But it also runs these extra commands on every SQL statement that I send through this code -
SET extra_float_digits = 3
SET ssl_renegotiation_limit = 0
SET lc_monetary = 'C'
SELECT 'Npgsql73113'
Here is the profiler screenshot of the relevant section. Does anyone know why there is this extra query and why there are multiple Connection Close events?
You are using Npgsql 2.2, which is very old by now and which sent these commands on startup. Please upgrade to the latest stable version (3.1.3) and these should be gone.
I'm less sure about the connection close events; if you still see this behavior in 3.1.3, please report an issue.
Related
I'm using EF Core 6 with Npgsql (connection pooling enabled) and I need to execute a native query for a specific use case. I'm trying to figure out the best way to manage the database connection for native queries inside an ASP.NET Core app. The query is executed inside a scoped service. I am doing the following and it's "working":
await _dbContext.Database.OpenConnectionAsync(cancellationToken);
var npgsqlConnection = _dbContext.Database.GetDbConnection() as NpgsqlConnection;
await using var cmd = new NpgsqlCommand(Sql, npgsqlConnection);
// execute cmd; the connection isn't closed by me
There are a couple of questions like this already but I wasn't satisfied with the answers. For example some say "if you open the connection you should close it". But this breaks the next time I use the DbContext.
It works if I don't close the connection and it doesn't exhaust the connection pool even if I execute it many times. I assume there is some magic going on and it's released back to the pool when the scope is disposed?
The other option would be to create and close the NpgsqlConnection directly. Is there a better way?
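For completeness, this is roughly what that alternative would look like (Sql and cancellationToken come from the surrounding service; _connectionString is a placeholder for however the connection string is injected):
await using var connection = new NpgsqlConnection(_connectionString); // assumed injected/configured
await connection.OpenAsync(cancellationToken);
await using var cmd = new NpgsqlCommand(Sql, connection);
await using var reader = await cmd.ExecuteReaderAsync(cancellationToken);
// read results; disposing the connection returns it to the Npgsql pool
Since Npgsql pools physical connections by default, opening and disposing a connection per query like this should be cheap and leaves the DbContext's own connection untouched.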
I have recently been changing some C# programs to add proper parameterization to MySQL statements that were originally written with concatenated strings. Invariably, I've run into problems with my statements, and I can't find a way to directly see the complete MySQL statement with parameters applied, other than this workaround where I pass the MySqlCommand to the following:
private string getMySqlStatement(MySqlCommand cmd)
{
    string result = cmd.CommandText;
    foreach (MySqlParameter p in cmd.Parameters)
    {
        // Quote string values so the substituted statement looks like valid SQL.
        string addQuote = (p.Value is string) ? "'" : "";
        result = result.Replace(p.ParameterName, addQuote + p.Value + addQuote);
    }
    return result;
}
This works, but I was wondering if there was a more proper way to see the full statement with parameters applied. Reading up on this, it looks like the parameters aren't actually applied until it reaches the server - is this correct? In that case, I suppose I can stick to my function above, but I just wanted to know if there was a better way to do it.
Note: I am just using this function for debugging purposes so I can see the MySQL statement.
MySQL supports two protocols for client/server communication: text and binary. In the text protocol, there is no support for command parameters in the protocol itself; they are simulated by the client library. With Connector/NET, the text protocol is always used unless you set IgnorePrepare=false in the connection string and call MySqlCommand.Prepare() for each command. So it's most likely the case that you are using the text protocol. This is good, because it makes it easier to log the actual statements with parameters applied.
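For contrast, this is roughly what opting into server-side prepared statements (the binary protocol) looks like with Connector/NET; the connection string, table, and parameter value below are placeholders:
// IgnorePrepare=false lets MySqlCommand.Prepare() create a real server-side
// prepared statement instead of being silently ignored.
using (var conn = new MySqlConnection(connectionString + ";IgnorePrepare=false"))
{
    conn.Open();
    using (var cmd = new MySqlCommand("SELECT name FROM users WHERE id = @id", conn))
    {
        cmd.Parameters.AddWithValue("@id", 42);
        cmd.Prepare();                  // binary protocol: parameters are sent separately from the SQL text
        object name = cmd.ExecuteScalar();
    }
}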
There are three ways to view the statements being executed:
Use Connector/NET Logging
Add Logging=true to your connection string and create a TraceListener that listens for the QueryOpened trace event. This should contain the full SQL statement with parameters interpolated. Instructions for setting this up are in the Connector/NET documentation.
Use MySQL Server Logging
Enable the general query log on your server to see all queries that are being executed. This is done with the --general_log=1 --general_log_file=/var/path/to/file server options.
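If restarting the server with those options isn't convenient, the general log can usually also be switched on at runtime by any account with sufficient privileges; a rough sketch from C# (the log file path is the same placeholder as above):
using (var conn = new MySqlConnection(connectionString))
{
    conn.Open();
    using (var cmd = new MySqlCommand("SET GLOBAL general_log_file = '/var/path/to/file'", conn))
        cmd.ExecuteNonQuery();
    using (var cmd = new MySqlCommand("SET GLOBAL general_log = 1", conn))
        cmd.ExecuteNonQuery();
}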
Packet Sniffing
If you're not using SslMode=Required (to encrypt the connection between client and server), then you can use Wireshark to capture the network traffic between your client and the server. Wireshark has a MySQL protocol analyser that will inspect MySQL traffic and identify the command packets (which contain SQL queries). This option is ideal if you aren't able to modify your client program or change the server logging settings.
I am currently trying to do something that should be simple and straight-forward - connect to a database server, run a query, see if I get anything back and if so send it back to the user. This is the code I'm using to do it:
MySqlDataReader reader = MySqlHelper.ExecuteReader(connectionString, $"SELECT * FROM table WHERE insertDateTime > '{DateTime.Now.AddSeconds(-1800).ToString("yyyy-MM-ddTHH:mm:ss")}'");
I have also tried this with a MySqlCommand and MySqlConnection object pair, and either way the result is the same - it takes approximately 7100 ms to connect to the MySQL server. I know that sounds like a problem that belongs on ServerFault, but my testing tells me otherwise. When I use the command-line MySQL client to connect to my database server with exactly the same credentials and run exactly the same query, the connection is established and the data comes back almost instantly. I don't know at this stage whether it's a server setting or not, but here's what I've tried so far:
Rebooting the server
Restarting the MySQL server
Setting the skip_name_resolve setting to 1 in order to prevent reverse name lookups on connect
Using alternative means of querying the server (mysql command line client and MySQL Workbench)
Opening all AWS IAM permissions on the RDS instance to allow everything from the server
Nothing seems to be making any difference, so I'm at a loss to explain this terrible performance. It's also only happening when I open the connection. Running queries, inserts, what have you is lightning fast. Any suggestions anyone might have would be most helpful.
I would not expect IAM permissions to have any impact on performance; I would expect them either to allow the connection or to reject it.
I would execute some diagnostic protocols to get more information.
1) Try a subsequent query, to see if it is an issue with the stack being initialized. Are subsequent queries faster?
2) Try a query that is just an identity query. Something that doesn't require any sort of IO.
3) Try a query from a different platform (maybe a scripting language like ruby or php)
Once you answer those it should help you narrow it down.
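One quick way to answer the first two points is to time the connection open separately from a trivial query; a rough sketch (connectionString is assumed):
var sw = System.Diagnostics.Stopwatch.StartNew();
using (var conn = new MySqlConnection(connectionString))
{
    conn.Open();
    Console.WriteLine($"Open took {sw.ElapsedMilliseconds} ms");

    sw.Restart();
    using (var cmd = new MySqlCommand("SELECT 1", conn))   // identity-style query, no table I/O
        cmd.ExecuteScalar();
    Console.WriteLine($"Query took {sw.ElapsedMilliseconds} ms");
}
If Open accounts for nearly all of the ~7100 ms while the query itself is fast, the problem is in connection establishment rather than query execution.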
This is most likely caused by Connector/NET executing a slow WMI query to query connection attributes when opening the connection; this is logged as MySQL bug 80030.
As far as I know, this isn't fixed in newer versions of the driver, but you can work around it by switching to MySqlConnector, an OSS MySQL ADO.NET library.
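The change is mostly a package and namespace swap, since MySqlConnector keeps the familiar class names; a rough sketch (the namespace is MySqlConnector in 1.x and later, MySql.Data.MySqlClient in the early 0.x releases):
using MySqlConnector; // instead of using MySql.Data.MySqlClient;

using (var conn = new MySqlConnection(connectionString))
{
    conn.Open(); // MySqlConnector skips the slow WMI lookup, so Open should no longer take seconds
    using (var cmd = new MySqlCommand("SELECT NOW()", conn))
    {
        object serverTime = cmd.ExecuteScalar();
    }
}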
Recently our QA team reported a very interesting bug in one of our applications. Our application is a C# .Net 3.5 SP1 based application interacting with a SQL Server 2005 Express Edition database.
By design the application is developed to detect database offline scenarios and if so to wait until the database is online (by retrying to connect in a timely manner) and once online, reconnect and resume functionality.
What our QA team did was, while the application is retrieving a bulk of data from the database, stop the database server, wait for a while and restart the database. Once the database restarts the application reconnects to the database without any issues but it started to continuously report the exception "Could not find prepared statement with handle x" (x is some number).
Our application is using prepared statements and it is already designed to call the Prepare() method again on all the SqlCommand objects when the application reconnects to the database. For example,
At application startup,
SqlCommand _commandA = connection.CreateCommand();
_commandA.CommandText = @"SELECT COMPANYNAME FROM TBCOMPANY WHERE ID = @ID";
_commandA.CommandType = CommandType.Text;
SqlParameter _paramA = _commandA.CreateParameter();
_paramA.ParameterName = "@ID";
_paramA.SqlDbType = SqlDbType.Int;
_paramA.Direction = ParameterDirection.Input;
_paramA.Size = 0;
_commandA.Parameters.Add(_paramA);
_commandA.Prepare();
After that we use ExecuteReader() on this _commandA with a different @ID parameter value in each cycle of the application.
Once the application detects the database going offline and coming back online, upon reconnect to the database the application only executes,
_commandA.Prepare();
Two more strange things we noticed.
1. The above situation only happens with CommandType.Text commands. Our application uses exactly the same logic to invoke stored procedures, but we never get this issue with stored procedures.
2. Up to now we have been unable to reproduce this issue, no matter how many different ways we try it, when running in Debug mode in Visual Studio.
Thanks in advance.
After almost 3 days since asking the question, close to 20 views, and 1 answer, I have to conclude that this is not a scenario we can handle in the way we have tried with SQL Server.
The best way to mitigate this issue in your application is to re-create the SqlCommand object instance once the application detects that the database is back online.
We did the change in our application and our QA team is happy about this modification since it provided the best (or maybe the only) fix for the issue they reported.
A final thanks to everyone who viewed and answered the question.
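For anyone hitting the same problem, a minimal sketch of what that change amounts to: on reconnect, dispose the old command and build and prepare a fresh one against the new connection (the method name here is illustrative, not from our actual code):
private void OnDatabaseReconnected(SqlConnection connection)
{
    if (_commandA != null)
        _commandA.Dispose();                     // discard the command tied to the old server session
    _commandA = connection.CreateCommand();
    _commandA.CommandText = @"SELECT COMPANYNAME FROM TBCOMPANY WHERE ID = @ID";
    _commandA.CommandType = CommandType.Text;
    SqlParameter param = _commandA.CreateParameter();
    param.ParameterName = "@ID";
    param.SqlDbType = SqlDbType.Int;
    param.Direction = ParameterDirection.Input;
    _commandA.Parameters.Add(param);
    _commandA.Prepare();                         // prepares against the new session, so the handle is valid again
}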
The server caches the query plan when you call 'command.Prepare'. The error indicates that it cannot find this cached query plan when you invoke 'Prepare' again. Try creating a new 'SqlCommand' instance and invoking the query on it. I've experienced this exception before and it fixes itself when the server refreshes the cache. I doubt there is anything that can be done programmatically on the client side, to fix this.
This is not necessarily related exactly to your problem but I'm posting this as I have spent a couple of days trying to fix the same error message in my application. We have a Java application using a C3P0 connection pool, JTDS driver, connecting to a SQL Server database.
We had disabled statement caching in the C3P0 connection pool, but had not done this at the driver level. Adding maxStatements=0 to our connection URL stopped the driver from caching statements and fixed the error.
I am building a Winforms C# 2.0 application.
I have successfully been able to connect to my SQL Server database using the following:
m_connexion = new SqlConnection("server=192.168.xxx.xxx;uid=...;pwd=...;database=...");
Because my company wanted to be able to use any database, I switched to the ODBC driver, and my code became:
m_connexion = new OdbcConnection("server=192.168.xxx.xxx;uid=...;pwd=...;database=...");
However, this throws a System.InvalidOperationException. Any idea why?
I'm also trying to use a DSN, but the command
OdbcConnection connection = new OdbcConnection("DSN=MyDataSourceName");
suggested elsewhere likewise throws a System.InvalidOperationException.
The connection string needs a Provider= so that the ODBC drivers know which server you're connecting to. In this case Provider=SQLSERVER I believe.
UPDATE: Should have been Provider=SQLOLEDB
I think you need to specify a driver. Look here for details: http://connectionstrings.com/sql-server-2005#21
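Concretely, that means an ODBC-style connection string with a Driver= keyword rather than the SqlConnection-style string; something like the sketch below (the driver name depends on what is installed on the machine, e.g. "SQL Server" or "SQL Server Native Client 10.0", and the database name and credentials are placeholders):
using (OdbcConnection connection = new OdbcConnection(
    "Driver={SQL Server};Server=192.168.xxx.xxx;Database=mydb;Uid=myuser;Pwd=mypassword;"))
{
    connection.Open();
    // run queries via OdbcCommand / OdbcDataReader as before
}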
If you specify a DSN, you have to configure the DSN using the ODBC control panel. It's called "Set up data sources (ODBC)" under Administrative Tools. The panel also has a "test" button, which might tell you more about what's going wrong.
P.S. Being "database independent" is much more work than just using the ODBC connection, command, and data reader classes. You'd have to make sure your queries run on each target database, which you won't be able to do unless you have a test server for each. So if I were you, I'd code it up using SqlConnection, since you already have that working.