We have written several C# web services that have a connection to our internal Firebird 2.5.5 database.
Unfortunately the exception "Error reading data from the connection" is thrown more and more often and we don't know how to fix it.
We tried to disable pooling but this did not have the desired effect.
We also wrote a try catch block that reconnects and re-executes the SQL, but this does not seem to us to be the right solution.
Is there another option?
Here is some environment information:
C# 7.0
.NET 4.5
Firebird Version 2.5.5
Firebird Driver 5.5.0
The Firebird log does not show any error messages at that time.
The error happens from time to time with any SQL statement.
The problem is relatively simple: the network connection between client and server is interrupted or broken for some reason, but the State of the client connection remains Open, even though you cannot use that connection anymore. Unfortunately Firebird decided not to update this status to Broken automatically, which would make a lot more sense if you ask me.
You already figured out that reopening the connection "somewhat fixes" the problem, and we have discussed that you could do this only when FbException.ErrorCode is 335544726.
Unfortunately this does mean that any open transaction is also lost, and you cannot commit any data from it anymore. The only way I could think of to reliably recover from this situation is to rethrow the exception:
try
{
    // ...
}
catch (FbException ex)
{
    if (ex.ErrorCode == 335544726)
    {
        // close the connection (reopen depending on your application)
    }
    throw;
}
This way, you can catch this exception at a higher level in your application and deal with it in whatever way is appropriate at that point, e.g. retrying the entire transaction, or letting the user choose what to do.
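For illustration, here is a minimal sketch of that higher-level retry, assuming the unit of work is wrapped in a delegate; the ExecuteWithRetry helper and the attempt count are my own illustrative names, not part of the Firebird provider:

using System;
using FirebirdSql.Data.FirebirdClient;

void ExecuteWithRetry(FbConnection connection, Action<FbTransaction> work, int maxAttempts = 2)
{
    // 335544726 = "Error reading data from the connection"
    const int netReadError = 335544726;
    for (int attempt = 1; ; attempt++)
    {
        try
        {
            using (var transaction = connection.BeginTransaction())
            {
                work(transaction);
                transaction.Commit();
                return;
            }
        }
        catch (FbException ex) when (ex.ErrorCode == netReadError && attempt < maxAttempts)
        {
            // The old connection is unusable; close and reopen,
            // then retry the entire transaction from scratch.
            connection.Close();
            connection.Open();
        }
    }
}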
Related
I have a long running application that uses NHibernate.ISessionFactory to connect to an Oracle database.
Occasionally the database goes offline (e.g. for weekend maintenance), but even once the database is back online, subsequent queries fail with the following exception (inner exceptions also shown):
NHibernate.Exceptions.GenericADOException: could not execute query
[ select .....]
>> Oracle.ManagedDataAccess.Client.OracleException: ORA-03135: Connection lost contact
>> OracleInternal.Network.NetworkException: ORA-03135: Connection lost contact
>> System.Net.Sockets.SocketException: An established connection
was aborted by the software in your host machine
Restarting the application restores the functionality, but I would like the application to be able to automatically cope without a restart, by "resetting" the connection.
I have tried the following with my ISessionFactory when I hit this exception:
sf.EvictQueries();
sf.Close();
sf = null;
sf = <create new session factory>
but I see the same exception after recreating the ISessionFactory. I assume this is because NHibernate is caching the underlying broken connection in some kind of connection pool?
How can I persuade NHibernate to create a genuinely new connection (or even just reset all state completely), and hence allow my application to fix the connection issue itself without an application restart?
EDIT:
Following A_J's answer, note that I am already calling using (var session = _sessionFactory.OpenSession()) for each database request.
I suspect you are opening the ISession (the call to ISessionFactory.OpenSession()) at startup and closing it at application end. This is the wrong approach for any long-running application.
You should manage the connection over a much shorter lifetime. In a web application, this is generally handled per request. In your case, you should work out what that unit should be. If yours is a Windows service that does some activity at a specified interval, then the Timer_Tick event is a good place.
I cannot suggest what that location could be in your application; you need to find out on your own.
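As a rough sketch (the method and parameter names here are just illustrative), each unit of work would then look something like this:

using NHibernate;

// The ISessionFactory is built once at startup and shared;
// sessions are short-lived and opened per unit of work.
public void DoUnitOfWork(ISessionFactory sessionFactory)
{
    using (ISession session = sessionFactory.OpenSession())
    using (ITransaction transaction = session.BeginTransaction())
    {
        // ... queries and updates for this one request / timer tick ...
        transaction.Commit();
    } // the session closes here and its connection returns to the pool
}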
Edit 1
Looking at your edit and comment, I do not think this has anything to do with NHibernate. It may be that the connection pool is returning a disconnected/stale connection to NHibernate.
Refer to this and this accepted answer.
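If the pool is indeed handing back a stale connection, one thing that may be worth trying (a sketch, not something I have verified against your setup) is flushing ODP.NET's connection pools when ORA-03135 surfaces, so the next OpenSession() cannot reuse a dead connection:

using System;
using Oracle.ManagedDataAccess.Client;

public void RunWithPoolReset(Action databaseWork)
{
    try
    {
        databaseWork();
    }
    catch (OracleException ex) when (ex.Number == 3135) // ORA-03135
    {
        // Drop all pooled connections so the next open is genuinely new,
        // then let the caller decide whether to retry the unit of work.
        OracleConnection.ClearAllPools();
        throw;
    }
}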
I have been facing the following error intermittently.
Authentication to host '127.0.0.1' for user 'root' using method 'mysql_native_password' failed with message: Reading from the stream has failed.
It shows up at any time and I am at my wits' end. I also posted a bug on MySQL Bugs, but the suggested solutions are not proving to be effective in any way. I hope you guys can help me out.
Here is the link to MySQL Bug for details: Never seems to go away!
Some more detail: I have a client-server system, but this bug occurs on the server system (where the MySQL database is installed) when a locally running app on the server system tries to run a query.
I had already opened a question here, but it has since gone dead. Just a caveat: I thought that skip-name-resolve solved the issue, but it seems to have just lowered the frequency. Hope someone can help me out this time around.
EDIT: The MySQL guys say that in a client-server setup the server may close a connection if it is unused for a long time. However, this is not what I am facing, as I create a new connection every time I want to execute a query. I made this point clear in the last comment on the MySQL Bugs.
Guys, I tried this: "SslMode=None" in the connection string. But if you need SSL, then read this:
http://www.voidcn.com/article/p-phfoefri-bpr.html
Here is a sample connection string that works:
connectionString="Server=192.168.10.5;Database=mydata;Uid=root;Pwd=****;SslMode=None"
Hope this helps
I've been getting this error quite frequently with Amazon's MySQL RDS instances, mostly with multi-AZ instances.
It would be interesting to compare notes to see if others mostly get this issue with RDS also?
Amazon is known to rely heavily on "fast" DNS changes to switch over stuff with things like ELBs. I wonder if the same thing is happening with RDS? Or some other internal AWS switching is messing up the idle connections in the pool.
This would explain why the Oracle devs can't reproduce it and don't see it as much of an issue.
Anyway, I've had to just deal with it and add retry logic when opening a connection.
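For what it's worth, my retry logic is roughly the following sketch (the attempt count and backoff are arbitrary; tune them for your environment):

using System;
using System.Threading;
using MySql.Data.MySqlClient;

public MySqlConnection OpenWithRetry(string connectionString, int maxAttempts = 3)
{
    for (int attempt = 1; ; attempt++)
    {
        var connection = new MySqlConnection(connectionString);
        try
        {
            connection.Open();
            return connection;
        }
        catch (MySqlException) when (attempt < maxAttempts)
        {
            // Dispose the failed connection and wait a little before retrying.
            connection.Dispose();
            Thread.Sleep(TimeSpan.FromSeconds(attempt));
        }
    }
}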
This issue is caused by SSL.
Solution 1: SSL is not required. Since the issue is caused by SSL, we can turn off SSL by appending "SslMode=None" to the connection string.
Solution 2: SSL is required, and the server identity is important and needs to be verified. The server needs an internet connection to do the certificate verification. Please note the crypto API doesn't update the CTL (certificate trust list) for every process; the CTL is maintained at the operating system level. Once you connect the server to the internet and make an SSL database connection, the CTL will be updated automatically. Then you may disconnect the internet connection. Note again that the CTL has an expiration date, after which Windows needs to update it again; this will probably occur after several months.
Solution 3: SSL is required, but the server identity is not important. Typically SSL is only used to encrypt the network transport in this case. We can turn off the CTL update:
Press Win+R to open the "Run" dialog
Type "gpedit.msc" (without quotes) and press Enter
In the "Local Group Policy Editor", expand "Computer Configuration", expand "Administrative Templates", expand "System", expand "Internet Communication Management", and then click "Internet Communication settings".
In the details pane, double-click "Turn off Automatic Root Certificates Update", click Enabled, then click OK. This change will be effective immediately, without a restart.
http://www.voidcn.com/article/p-phfoefri-bpr.html
Unfortunately, this error occurs if the application and MySQL are on the same computer; if you move the application to a different computer, it is fine.
I tried many ways, but for now there is no other solution. The bug has been reported many times by others: https://bugs.mysql.com/bug.php?id=76597
I had the exact same problem performing an upgrade on a Windows Forms application. The solution I found was to change the server, because that one was in trouble. The server that was presenting the situation you described had WordPress installed with MySQL 5.6.34; on the other I did a clean install with MySQL 5.6.26.
I don't know if it has to do with the environment variables used. I believe it has nothing to do with Connection Timeout, since that is a property that only applies to an open connection. This error occurred in a shared environment as well as in a local installation with MariaDB. Another problem I found was that one of the SELECT commands that retrieved the data was malformed, not respecting the blanks:
SELECT COLUNA1,COLUNA2 FROM TABLE;
I made the change to SELECT COLUNA1, COLUNA2 FROM TABLE;
I am still testing the solution I presented, and as of the time of posting there have been no more errors.
I was getting the error
Authentication to host 'localhost' for user 'root' using method 'mysql_native_password' failed with message: Reading from the stream
I solved it when I put SslMode=None in my connection string.
However, note that the message is slightly different from yours.
Check my connection string:
connection.ConnectionString = "server=myadressserver;userid=myuser;password=mypassword;database=test;SslMode=None";
When I try to access (open a connection to) an offline SQL Server instance (service turned off) from my web service, no exception is thrown, just a brief 5-second timeout followed by a return (I put the breakpoint way out in my controller; I'm not sure yet what the connection object returns during the call to Open).
I'm trying to simulate a scenario where the DB is not available to the webservice, and figured an exception would be thrown and I could just log the error.
Any suggestions on how to properly detect DB connection issues? (I'm guessing I need to look at what the connection object returns when calling Open.) It'd be nice to just have an exception bubble up, though.
Thanks.
A connection timeout will be thrown for sure unless your thread is being aborted before that by a web server timeout. Placing a try/catch in your controller would certainly catch the DB connection timeout.
You should post code, as SqlConnection.Open() definitely would throw an exception but if you're using some other call/code to open the connection and it's getting swallowed then it is obviously difficult to determine a root cause.
My guess is that you are getting back a Connection object that is not connected. To check whether it's connected:
if (conn.State == ConnectionState.Closed)
{
    ...
}
I'm looking for a way to check if a server is still available.
We have an offline application that saves data on the server, but if the server connection drops (it happens occasionally), we have to save the data to a local database instead of the online database.
So we need a continuous check to see if the server is still available.
We are using C# for this application.
Checking via SqlConnection.Open is not really an option because it takes about 20 seconds before an error is thrown; we can't wait that long, and I'm using some HTTP services as well.
Just use the System.Net.NetworkInformation.Ping class. If your server does not respond to ping (because for some reason you decided to block ICMP Echo requests), you'll have to invent your own service for this. Personally, I'm all for not blocking ICMP Echo requests, and I think this is the way to go. The ping command has been used for ages to check the reachability of hosts.
using System.Net;
using System.Net.NetworkInformation;

var ping = new Ping();
var reply = ping.Send("google.com", 60 * 1000); // 1 minute timeout (in ms)
// or...
reply = ping.Send(new IPAddress(new byte[] { 127, 0, 0, 1 }), 3000);
If the connection is as unreliable as you say, I would not use a separate check, but make saving the data locally part of the exception handling.
I mean if the connection fails and throws an exception, you switch strategies and save the data locally.
If you check first and the connection drops afterwards (when you actually save data), then you would still run into an exception you need to handle, so the initial check was unnecessary. The check would only be useful if you could assume that after a successful check the connection is up and stays up.
From your question it appears the purpose of connecting to the server is to use its database. Your priority must be to check whether you can successfully connect to the database. It doesn't matter if you can PING the server or get an HTTP response (as suggested in other answers), your process will fail unless you successfully establish a connection to the database. You mention that checking a database connection takes too long, why don't you just change the Connection Timeout setting in your application's connection string to a more impatient value such as 5 seconds (Connection Timeout=5)?
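A minimal sketch of that fail-fast approach (server and database names are placeholders):

using System.Data.SqlClient;

var builder = new SqlConnectionStringBuilder
{
    DataSource = "myServer",        // placeholder
    InitialCatalog = "myDatabase",  // placeholder
    IntegratedSecurity = true,
    ConnectTimeout = 5              // "Connection Timeout=5" in the string
};

using (var connection = new SqlConnection(builder.ConnectionString))
{
    connection.Open(); // throws within ~5 seconds if the server is unreachable
}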
If this is a SQL Server instance, then you can just try to open a new connection to it. If the SqlConnection.Open method fails, then you can check the error message to determine if the server is unavailable.
What you are doing now is:
use distant server
if distant server fails, resort to local cache
How to determine if the server is available? Use a catch block. That's the simplest to code.
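Something like this sketch, where MyData, SaveToServer, and SaveLocally are placeholders for your own types and data-access code:

using System.Data.SqlClient;

public void Save(MyData data)
{
    try
    {
        SaveToServer(data);   // the online database
    }
    catch (SqlException)
    {
        SaveLocally(data);    // fall back to the local database
    }
}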
If you actually have a local database (and not, for example, a list of transactions or data waiting to be inserted), I would turn the design around:
use the local database
regularly synchronize the local database and the distant database
I'll let you be the judge on concurrency constraints and other stuff related to your application to pick a solution.
Since you want to see if the database server is there, either catch any errors when you attempt to connect to the database, or use a socket and attempt a raw connection to the server on some service. I'd suggest the database, as that is the resource you need.
Every now and then in a high volume .NET application, you might see this exception when you try to execute a query:
System.Data.SqlClient.SqlException: A transport-level error has
occurred when sending the request to the server.
According to my research, this is something that "just happens" and not much can be done to prevent it. It does not happen as a result of a bad query, and generally cannot be duplicated. It just crops up maybe once every few days in a busy OLTP system when the TCP connection to the database goes bad for some reason.
I am forced to detect this error by parsing the exception message, and then retrying the entire operation from scratch, to include using a new connection. None of that is pretty.
Anybody have any alternate solutions?
I posted an answer on another question on another topic that might have some use here. That answer involved SMB connections, not SQL. However, it was identical in that it involved a low-level transport error.
What we found was that in a heavy-load situation, it was fairly easy for the remote server to time out connections at the TCP layer simply because the server was busy. Part of the reason was that the defaults for how many times TCP will retransmit data on Windows weren't appropriate for our situation.
Take a look at the registry settings for tuning TCP/IP on Windows. In particular you want to look at TcpMaxDataRetransmissions and maybe TcpMaxConnectRetransmissions. These default to 5 and 2 respectively, try upping them a little bit on the client system and duplicate the load situation.
Don't go crazy! TCP doubles the timeout with each successive retransmission, so the timeout behavior for bad connections can go exponential on you if you increase these too much. As I recall, upping TcpMaxDataRetransmissions to 6 or 7 solved our problem in the vast majority of cases.
This blog post by Michael Aspengren explains the error message "A transport-level error has occurred when sending the request to the server."
To answer your original question:
A more elegant way to detect this particular error, without parsing the error message, is to inspect the Number property of the SqlException.
(This actually returns the error number from the first SqlError in the Errors collection, but in your case the transport error should be the only one in the collection.)
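For example (a sketch; 233, 10053, and 10054 are numbers commonly reported for transport-level errors, but verify against the values you actually see in your own logs):

using System.Data.SqlClient;

public static bool IsTransportError(SqlException ex)
{
    // SqlException.Number surfaces the first SqlError in the Errors collection.
    return ex.Number == 233 || ex.Number == 10053 || ex.Number == 10054;
}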
I had the same problem, albeit with service requests to a SQL DB.
This is what I had in my service error log:
System.Data.SqlClient.SqlException: A transport-level error has occurred when sending the request to the server. (provider: TCP Provider, error: 0 - An existing connection was forcibly closed by the remote host.)
I have a C# test suite that tests a service. The service and DB were both on external servers, so I thought that might be the issue. So I deployed the service and DB locally, to no avail. The issue continued. The test suite isn't even a hard-pressing performance test at all, so I had no idea what was happening. The same test was failing each time, but when I disabled that test, another one would fail continuously.
I tried other methods suggested on the Internet that didn't work either:
Increase the registry values of TcpMaxDataRetransmissions and TcpMaxConnectRetransmissions.
Disable the "Shared Memory" option within SQL Server Configuration Manager under "Client Protocols" and sort TCP/IP to 1st in the list.
This might occur when you are testing scalability with a large number of client connection attempts. To resolve this issue, use the regedit.exe utility to add a new DWORD value named SynAttackProtect to the registry key HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters\ with value data of 00000000.
My last resort was to fall back on the age-old saying "try, try again". So I have nested try-catch statements to ensure that if the TCP/IP connection is lost in the lower communications protocol, it doesn't just give up there but tries again. This is now working for me; however, it's not a very elegant solution.
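In essence it boils down to this sketch (executeQuery stands in for the real service call):

using System;
using System.Data.SqlClient;

public void ExecuteWithOneRetry(Action executeQuery)
{
    try
    {
        executeQuery();
    }
    catch (SqlException)
    {
        // The first attempt died, possibly at the TCP level; try once more
        // on what should be a fresh pooled connection.
        executeQuery();
    }
}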
Use Enterprise Services with transactional components.
I have seen this happen in my own environment a number of times. The client application in this case is installed on many machines. Some of those machines happen to be laptops; people were leaving the application open, disconnecting the laptop, then plugging it back in and attempting to use the application. This will then cause the error you have mentioned.
My first point would be to look at the network and ensure that the servers aren't on DHCP, renewing IP addresses and causing this error. If that isn't the case, then you have to start trawling through your event logs looking for other network-related entries.
Unfortunately it is, as stated above, a network error. The main thing you can do is just monitor the connections using a tool like netmon and work back from there.
Good Luck.
You should also check hardware connectivity to the database.
Perhaps this thread will be helpful:
http://channel9.msdn.com/forums/TechOff/234271-Conenction-forcibly-closed-SQL-2005/
I'm using a reliability layer around my DB commands (abstracted away in the repository interface). Basically, that's just code that intercepts any expected exception (DbException, and also InvalidOperationException, which happens to get thrown on connectivity issues), logs it, captures statistics, and retries everything again.
With that reliability layer present, the service has been able to survive stress-testing gracefully (constant deadlocks, network failures, etc.). Production is far less hostile than that.
PS: There is more on that here (along with a simple way to define reliability with the interception DSL)
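Stripped of the logging and statistics, the layer boils down to something like this sketch (the attempt count and delay are illustrative):

using System;
using System.Data.Common;
using System.Threading;

public T Execute<T>(Func<T> databaseCall, int maxAttempts = 3)
{
    for (int attempt = 1; ; attempt++)
    {
        try
        {
            return databaseCall();
        }
        catch (Exception ex) when (
            (ex is DbException || ex is InvalidOperationException)
            && attempt < maxAttempts)
        {
            // Logging and statistics capture would go here.
            Thread.Sleep(TimeSpan.FromMilliseconds(200 * attempt));
        }
    }
}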
I had the same problem. I asked my network geek friends, and they all said what people have replied here: it's the connection between the computer and the database server. In my case it was my Internet Service Provider, or their router, that was the problem. After a router update, the problem went away. But do you have any other drop-outs of the internet connection from your computer or server? I had...
I experienced the transport error this morning in SSMS while connected to SQL 2008 R2 Express.
I was trying to import a CSV with \r\n. I coded my row terminator for 0x0d0x0a. When I changed it to 0x0a, the error stopped. I can change it back and forth and watch it happen/not happen.
BULK INSERT #t1 FROM 'C:\123\Import123.csv' WITH
( FIRSTROW = 1, FIELDTERMINATOR = ',', ROWTERMINATOR = '0x0d0x0a' )
I suspect I am not writing my row terminator correctly, because SQL parses the terminator one character at a time while I'm trying to pass two characters.
Anyhow, this question is 4 years old now, but this may provide a bit of information for the next user.
I just wanted to post a fix here that worked for our company on new software we've installed. We had been getting the following error since day 1 in the client log file: Server was unable to process request. ---> A transport-level error has occurred when receiving results from the server. (provider: TCP Provider, error: 0 - The semaphore timeout period has expired.) ---> The semaphore timeout period has expired.
What completely fixed the problem was to set up a link aggregation group (LAG) on our switch. Our Dell FX1 server has redundant fiber lines coming out of the back of it. We did not realize that the switch they're plugged into needed to have a LAG configured on those two ports. See details here: https://docs.meraki.com/display/MS/Switch+Ports#SwitchPorts-LinkAggregation