I have developed a standard web application that allows users to display and update a set of data from an SQL database.
The application uses an AngularJS client side that interacts with the web server via MVC Web API calls to retrieve and update data in the database.
The server-side code is written in C# on .NET 4.5 and uses Entity Framework 6.0 to access the database.
The web application is hosted in an Azure Web App, and the database is an Azure SQL Database.
The issue is that when the application has not been used for about 10-15 minutes and is then used again, the first data retrieval often takes over 10 seconds to return to the browser. After that, performance is fine until the next time the application is left unused.
I've added tracing to the application, and we see that the delay is when the connection opens. The actual query on the database runs sub-second.
I've noticed, though, that different hosting configurations give different results. In particular, hosting in-house and pointing at the Azure database does not encounter anywhere near the same delays.
I've changed one of the routines to use ADO.NET instead of Entity Framework and adjusted the trace to try to narrow it down further.
What I see is this:
ConnectionStringSettings ADOcnxstring = ConfigurationManager.ConnectionStrings["DevFEConnectAdo"];
DbConnection ADOconnection = new SqlConnection(ADOcnxstring.ConnectionString);
The delay is here, before the SQL has even been defined!
Then I build the command and use the DataReader:
DbCommand ADOcommand = ADOconnection.CreateCommand();
// ... build the command, execute the DataReader, etc.
So the delay is in opening the connection to the database.
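For anyone reproducing this, here is a minimal sketch of that timing instrumentation, assuming the same DevFEConnectAdo connection string from below (the class and method names are mine):

using System.Configuration;
using System.Data.Common;
using System.Data.SqlClient;
using System.Diagnostics;

static class ConnectionTimer
{
    // Times the two stages separately to show whether the delay is in
    // constructing the SqlConnection or in opening it.
    public static void TimeOpen()
    {
        ConnectionStringSettings settings = ConfigurationManager.ConnectionStrings["DevFEConnectAdo"];
        Stopwatch sw = Stopwatch.StartNew();

        using (DbConnection connection = new SqlConnection(settings.ConnectionString))
        {
            Trace.WriteLine("Constructed after " + sw.ElapsedMilliseconds + " ms");
            connection.Open(); // on a pool miss, the full login handshake happens here
            Trace.WriteLine("Opened after " + sw.ElapsedMilliseconds + " ms");
        }
    }
}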
My connection string is standard:
<add name="DevFEConnectAdo" connectionString="data source=feeunsqldevfeconnect.database.windows.net;initial catalog=feeunsqldbdevfeconnect;persist security info=True;user id=???#???;password=???;multipleactiveresultsets=True"></add>
15 minutes is too short for your app to be recycled (as CSharpRocks suggested), so I don't think that's the issue here.
The delay occurs because a new database connection is established on the first call after the idle timeout. Typically, a connection that has been inactive for 4-10 minutes will be closed. If a minimum pool size is specified, that many connections are kept alive even after the idle timeout expires.
Try using this connection string (adjust the min pool size to your needs):
<add name="DevFEConnectAdo" connectionString="data source=feeunsqldevfeconnect.database.windows.net;initial catalog=feeunsqldbdevfeconnect;persist security info=True;user id=???#???;password=???;multipleactiveresultsets=True;Min Pool Size=3;Load Balance Timeout=180;"></add>
Further details
Why do we need to set Min pool size in ConnectionString
List of SQL Connection Properties - documentation
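If you would rather set these values in code than in config, here is a minimal sketch using SqlConnectionStringBuilder; the server, catalog, and credential values are the placeholders from the question:

using System.Data.SqlClient;

static class PooledConnectionFactory
{
    // Builds the same connection string programmatically; the pooler will
    // keep at least MinPoolSize physical connections alive across idle periods.
    public static SqlConnection Create()
    {
        var builder = new SqlConnectionStringBuilder
        {
            DataSource = "feeunsqldevfeconnect.database.windows.net",
            InitialCatalog = "feeunsqldbdevfeconnect",
            PersistSecurityInfo = true,
            UserID = "???", // placeholder, as in the question
            Password = "???",
            MultipleActiveResultSets = true,
            MinPoolSize = 3,
            LoadBalanceTimeout = 180
        };
        return new SqlConnection(builder.ConnectionString);
    }
}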
After some time, this eventually got resolved with some help from Microsoft Azure support.
The detail that I left out was that my Web App was actually pointing at 2 databases:
- the application's Azure SQL database, which I was having the delay problem with
- a 'Data Warehouse' we had on an Azure Virtual Machine
Because of replication between our in-house database servers and the 'Data Warehouse', the Virtual Machine and the Web App were both inside an Azure Virtual Network.
The problem was that there can be network problems when a Web App inside a Virtual Network talks to Azure SQL Databases (which cannot be placed inside a Virtual Network).
My solution was to:
- configure an Endpoint on the Data Warehouse virtual server, and
- take the Web App out of the Virtual Network and point it at the virtual server by means of the Endpoint.
At this point all the delays went away, and I could remove the Min Pool Size setting (and the Load Balance Timeout, which I later discovered did nothing anyway).
Web apps are recycled after a few minutes of inactivity. Try enabling the Always On setting located in Settings/Application Settings in the portal to see if this helps with your issue.
Related
I have a SQL Server instance with two databases, a production database and a development database. The .NET 2.0 website hitting the production database with manual SqlConnection code works fine. The other database is hit from a newer ASP.NET MVC app using Entity Framework 6.2 and is getting timeout issues. The timeout takes 30 seconds the first time, but the page comes back almost instantaneously on subsequent refreshes. Both websites are on the same box as the database, so they only use "localhost" to connect. They use SQL Server logins, not Windows authentication.
I copied the .edmx and .tt files into a .NET console app, and that app has no problem hitting the database with the exact same LINQ query, pulling the same data that is failing.
I then created a new website and copied just that same code into an .aspx page. It fails the first time with a timeout and then works on subsequent attempts (a week ago, the main dev site was doing the same thing).
I detached the dev database from the SQL Server 2008 R2 instance and attached it to a newly installed SQL Server Express instance on a different port, and I get the same results.
The web server is Windows Server 2008 Standard 32-bit. I copied both websites and the console application to a new box (I thought it was 2016, but it turns out to be 2008 Standard 64-bit) and get the same results.
The dev site was working until a couple of months ago. The client was using local user accounts for everything but had a domain, and he wanted to test Windows authentication for an old VB app that hits the same database, so I had started migrating test accounts to the domain. When the client later tried to change his password for an unrelated reason, we discovered that he was already using a domain account, but that his laptop could not connect to the domain. We found several other computers that could not connect, even though the machines I had joined to the domain during my testing were working fine. An outside network "friend" was brought in to figure out what was going on. At that point, I lost all track of what was actually done. I know that different network and domain configurations were tried and didn't fix the domain issues, but I don't know which. However, the production site was never rendered inoperative.
I have no idea what is going on. Does anyone else?
Oh, and in case it was a provider issue, I've also tried a manual connection using OleDbConnection from the web app, and it also fails with the timeout issue.
Update:
I spun up a new Windows Server 2016 Datacenter box, installed IIS and .NET on it, and copied the website to that box. It has no problems hitting the database and pulling the data from the other server.
I know patches and such were applied to the original box while the domain and network were being manipulated, but I don't know how far behind they were. I suspect that some patch changed a default or inherited .NET configuration option or something. I did do a "repair" on the .NET installation, and that didn't make a difference. However, with the production site working fine, I'm not currently willing to uninstall .NET or anything else; I'm afraid I would risk pushing this same error into the production site, and the client would be screwed.
It seems that for some reason, the timeout period elapsed while attempting to consume the pre-login handshake acknowledgement.
Try increasing the Connect Timeout property in your connection string to 60 or more; the default is 15 seconds.
Example: Data Source=(LocalDB)\v11.0;Integrated Security=True;Connect Timeout=60
I have an ASP.NET Web API deployed on Azure, and I also have a MySQL database in a separate virtual machine running Linux. The problem is that when I restart the database and redeploy the Web API from Visual Studio, the connection between the Web API and MySQL works fine, but after about 30 minutes I get this error:
"Unable to connect to any of the specified MySQL hosts"
If I want to make it work again, I have to restart the virtual machine with the database and redeploy the Web API from Visual Studio. I am using a connection string like this in my web.config:
<add name="DefaultConnection" connectionString="Server=publicIpAddress;Port=3306;Database=db_12345_db;Uid=user;Pwd=*********;" providerName="MySql.Data.MySqlClient" />
This connection was working on the server we had before we switched to Azure, so I suspect I did not configure Azure correctly.
Any idea how to fix this issue? Thanks
Connections Dropped
When connecting to an Azure SQL Database, idle connections may be terminated by a network component (such as a firewall) after a period of inactivity. There are two types of idle connections in this context:
- Idle at the TCP layer, where connections can be dropped by any number of network devices.
- Idle by the SQL Azure Gateway, where TCP keepalive messages might be occurring (making the connection not idle from a TCP perspective) but no query has been active in 30 minutes. In this scenario, the Gateway determines that the TDS connection is idle at 30 minutes and terminates it.
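If the drops cannot be prevented entirely, a small retry around opening the connection usually hides them. A minimal sketch, assuming MySql.Data's MySqlConnection (the class name and backoff values are mine):

using System.Threading;
using MySql.Data.MySqlClient;

static class ResilientMySql
{
    // Retries Open() a few times so that a connection dropped by an idle
    // timeout is quietly re-established instead of failing the request.
    public static MySqlConnection Open(string connectionString, int attempts = 3)
    {
        for (int i = 1; ; i++)
        {
            var connection = new MySqlConnection(connectionString);
            try
            {
                connection.Open();
                return connection;
            }
            catch (MySqlException)
            {
                connection.Dispose();
                if (i >= attempts)
                    throw;
                Thread.Sleep(1000 * i); // simple linear backoff before retrying
            }
        }
    }
}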
https://github.com/sidorares/node-mysql2/issues/316
Hope this works for you.
I have a database and an MVC application hosted in IIS. I periodically gather data from the internet and save it in the SQL database, then calculate statistics and graphs from that data and publish them in the MVC application.
The problem is that IIS has a recycling period of about 1 hour, meaning that my timer (the function that gathers data from the internet) is stopped whenever there is a server restart or recycle, or when there are no requests to the web page.
The solutions I have found are:
- Turn off recycling - I don't own the server, so I can't do that.
- A Windows service - 99% of hosts don't allow hosting one...
So is there any solution, service, or framework whose purpose is to gather data, where I can be sure it will not stop after some inactivity time or a server restart? Or is my logic completely wrong, and I need to gather the data differently? Can it be done on hosting I don't own? Can it be done using IIS?
Can it be done using IIS?
If the IIS instance in question has AppFabric installed, it supports an auto-start feature, which effectively lets you write 'service-like' code that keeps running in the background.
Quick overview here
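For reference, the plain ASP.NET 4.0 auto-start mechanism (which AppFabric builds on) is a preload client registered under serviceAutoStartProviders in applicationHost.config, together with startMode="AlwaysRunning" on the application pool. A minimal sketch, where the class name is mine:

using System.Web.Hosting;

// IIS calls Preload() when the application starts, before any request
// arrives, so work started here is not tied to incoming traffic.
public class Warmup : IProcessHostPreloadClient
{
    public void Preload(string[] parameters)
    {
        // Start the background work that must survive idle periods here,
        // e.g. the data-gathering timer from the question.
    }
}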
I am trying to solve the problem of very long response times from MySQL when opening a connection using the MySQL Connector for .NET.
I have MySQL 5.5 installed on an Azure VM (Server 2008) with --skip-name-resolve, and the database user accounts' host restrictions use IP addresses. I am using the latest MySQL Connector for .NET in my WCF service running on Azure (in the same location, US East; I have been using a trial subscription with no affinity set). The connection string in my WCF service uses the internal IP address of the VM hosting MySQL as the server parameter value. I also have "pooling = true;Min Pool Size=2;" just in case (I have tried without these parameters too).
When tracing the WCF service, the query response times once the service is running and processing requests are pretty good (even where each query result is unique and therefore not cached), and I have no issues with MySQL's performance provided it's being hit frequently.
But the huge problem I haven't been able to crack is the length of time it takes to open the connection to MySQL after no calls have been made to the database for about 3 or 4 minutes. If no database calls are made for a few minutes, it takes 8 or 9 seconds or more to open the connection again. I wrapped the actual "conn.Open();" with trace statements before and after the call, and this is the behaviour I see logged time and time again after a few minutes of inactivity.
Incidentally, I have also tried (and am still using) the 'using' style of connection handling to ensure that the MySQL Connector manages the connection pool.
e.g.:
using (var conn = new MySqlConnection(Properties.Settings.Default.someConnectionString))
{
    // ... statements ...
}
I feel like I have reached a dead end on this one so any suggestions would be greatly appreciated.
I can explain why it takes 8 or 9 seconds or more to open the connection to MySQL again after no calls have been made to the database for 3 or 4 minutes:
Windows Azure Web Sites uses a concept of hot (active) and cold (inactive) sites: if a website has no active connections, the site goes cold, meaning the host IIS process exits. When a new connection is made to the website, it takes a few seconds to get the site ready and working again. Because your site has a MySQL backend, serving the request takes a few seconds longer still, on top of the time taken by the IIS host process to start. That is why the response time is longer after a few minutes of inactivity.
You can see the following presentation for more details on Windows Azure Hot (active) and Cold (inactive) Websites:
http://video.ch9.ms/teched/2012/na/AZR305.pptx
At this time, I am not sure how you can keep the website always hot, whether moving to a shared website would help, or whether it is possible at all. What I can suggest is that you post your issue to the Windows Azure Web Sites forum, where someone from that team will provide an appropriate answer.
I am writing an application in C# that needs to do the following:
Without connecting to the database, I need to check whether there are new logs in the database. If there are, then I am allowed to open the connection and retrieve them.
So I just need to know if there are new logs (elements) in the database WITHOUT opening the connection to it.
The server could send mail to an administrator, and I could monitor the mailbox for changes, but that solution is unacceptable.
Could the server, when inserting new rows, create a *.txt file on disk indicating the new rows, which I could check and then delete/edit after downloading the changes?
(database is in SQL Server 2008 R2)
Is it even possible? Any/And/Or other options to do this are welcome.
Thank you all very much in advance.
Based on the following clarifying comments from the OP under the question:
There is a web application which checks for changes every 30 seconds and shows the latest authorizations. The database tracks employee authorization and has frequent updates. Now I'm building a desktop application, which has a local connection to the server and can update more frequently, but the client does not want the application to open a connection every second, although the connection is only open for several ms.
I think that the appropriate solution is a business layer.
If you build a business layer hosted in IIS that performs the database access on behalf of the users using a single database user for access (the application pool user or an impersonated user within the web application), then connection pooling will reduce the number of connections made to the database significantly.
Here is an MSDN article that describes the mechanics and benefits of connection pooling in great detail.
All of the clients, including the web layer, would connect to the business layer using WCF or .Net Remoting (depending on your .Net version), and the business layer would be the only application performing database access.
The added benefit of this approach is that you can move all database access (including from the web client) inside the DMZ so that there is no direct database access from the DMZ outward. This could be a good selling point for your customer.
We use this mechanism extensively for very large, very security and performance conscious customers.
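As an illustration of the shape such a layer might take, here is a minimal WCF contract (the service and operation names are hypothetical):

using System.Collections.Generic;
using System.ServiceModel;

// Every client (web or desktop) calls this service; only the service itself
// touches the database, under a single identity, so pooling is fully effective.
[ServiceContract]
public interface IAuthorizationService
{
    [OperationContract]
    IList<string> GetLatestAuthorizations();
}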
Update
As an alternative, you could have the business layer query the database every 30 seconds, extract the necessary information, and store it locally to the business layer in a database of some sort (Access, SQL Server Express, etc.). When requests from clients are received, they will be served from the local data store instead of the database.
You could do this by kicking off a background thread in global.asax's Application_Start event or by adding a cache entry that expires every 30 seconds and performing the work in the cache timeout event.
This reduces the number of connections to 1 (or 2, if the web layer isn't modified) every 30 seconds (or whatever the interval is).
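A minimal sketch of the cache-expiration variant, assuming an ASP.NET host (the key name and RefreshLocalStore are hypothetical):

using System;
using System.Web;
using System.Web.Caching;

public static class PollScheduler
{
    // Call once from Application_Start; each expiration triggers one database
    // poll, so at most one connection is opened every 30 seconds.
    public static void Start()
    {
        HttpRuntime.Cache.Insert(
            "poll-trigger",
            DateTime.UtcNow,
            null,
            DateTime.UtcNow.AddSeconds(30),
            Cache.NoSlidingExpiration,
            CacheItemPriority.NotRemovable,
            OnExpired);
    }

    private static void OnExpired(string key, object value, CacheItemRemovedReason reason)
    {
        RefreshLocalStore(); // query the database and update the local data store
        Start();             // schedule the next poll
    }

    private static void RefreshLocalStore()
    {
        // Hypothetical: run the query and persist results to the local store.
    }
}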
Try monitoring the change dates of the files inside the DB folder.
If the client desktop application is not going to be deployed massively, you could use SqlDependency. Then you wouldn't have to poll the database frequently; instead, the database will notify you if something changes.
You could also deploy a service on the server which uses SqlDependency and then connect to this service from your desktop applications.
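A minimal SqlDependency sketch (the table and column names are hypothetical, and the database must have Service Broker enabled):

using System.Data.SqlClient;

public class LogWatcher
{
    private readonly string _connectionString;

    public LogWatcher(string connectionString)
    {
        _connectionString = connectionString;
        SqlDependency.Start(_connectionString); // requires Service Broker
        Subscribe();
    }

    private void Subscribe()
    {
        using (var connection = new SqlConnection(_connectionString))
        using (var command = new SqlCommand(
            "SELECT LogId FROM dbo.AuthorizationLog", connection)) // two-part table name required
        {
            var dependency = new SqlDependency(command);
            dependency.OnChange += (sender, e) =>
            {
                Subscribe(); // notifications fire only once, so re-subscribe,
                             // then fetch the new rows over a normal connection
            };

            connection.Open();
            command.ExecuteReader().Dispose(); // executing the command registers the subscription
        }
    }
}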
If that's not an option, this document mentions some other options.
These two could be applied to your situation:
Create an AFTER UPDATE trigger on the table being monitored, whose action uses SQL Server Service Broker to send a message to the entity needing the notification.
Use Windows Server AppFabric Cache, which supports a change-notification mechanism based on an in-memory object cache and callback functions you register with the objects.