Entity Framework startup slow on everyone's computer but mine - c#

We started developing an application in C# recently and decided to try Entity Framework 6.1.3 with a code-first approach to handle persistence. The data is being stored in a SQL Server Express 2012 instance which is running on a local server. The application is still very small. Currently we are only persisting two POCOs and have two tables in the database: one with 5 fields (about 10 rows of data), and one with 4 fields (2 rows).
We are experiencing a 3 second delay when Entity Framework processes the first query with subsequent queries being extremely quick. After some searching we found that this is normal for Entity Framework so, although it seemed excessive for such a small database, we've been coping with it.
After doing some work today, none of which was specifically related to persistence, suddenly I found that the first query was only taking a quarter of a second to run. I couldn't see anything obvious in the changes I'd made so I uploaded them to source control and asked my colleague to download everything. When he builds and runs it on his computer, it still takes 3 seconds for the first query. I've compiled the application, tried it on two test computers and they experience the initial 3 second delay.
There seems to be no correlation between the problem and the computers/operating systems. My computer is running Windows 7 SP1 x64. My colleague's development computer is running Windows 10 x64. The other two test computers are running Windows 7 SP1 x86. They are all a similar specification (Core i5, 4GB/8GB RAM).
In my research of the delay I found that there are several things you can do to improve performance (pre-generating views etc) and I have done none of this. I haven't installed anything or made any changes to my system, although I suppose it's possible an update was installed in the background. Nothing has changed on the database server or in the database itself or in the POCOs. We are all connecting to the same database on the same server.
This raises a few obviously related questions. If it's possible for it to start up in a quarter of a second, why has it been taking 3 seconds up until now? What happened on my computer to suddenly improve performance and how can I replicate this on the other computers that are still slow?
Can anyone offer any advice please?
EDIT
I turned on logging in Entity Framework to see what queries were being run and how long they were taking. Before the first (and, for testing purposes, only) query is run, EF runs three migration-related queries. The query generated to retrieve my data follows:
SELECT
[Extent1].[AccountStatusId] AS [AccountStatusId],
[Extent1].[Name] AS [Name],
[Extent1].[Description] AS [Description],
[Extent1].[SortOrder] AS [SortOrder]
FROM [dbo].[AccountStatus] AS [Extent1]
-- Executing at 28/01/2016 20:55:16 +00:00
-- Completed in 0 ms with result: SqlDataReader
As you can see it runs really quickly which is hardly surprising considering there are only 2 records in that table. The 3 migration queries and my query take no longer than 5ms to run in total on both my computer and my colleague's computer.
Copying that query into SSMS and running it from various other machines produces the same result. It's so fast it doesn't register a measurable time. It certainly doesn't look like the query causes the delay.
EDIT 2: Screenshots of diagnostic tools
In order to give a good comparison I've altered the code so that the query runs at application start. I've added a red arrow to indicate the point at which the form appears. I hadn't noticed before but when my colleague runs the application the first time after starting Visual Studio, it's about a second quicker. All subsequent times are slower.
1) Colleague's computer - first run after loading Visual Studio
2) Colleague's computer - all subsequent runs
3) My computer - all runs
So every time my colleague runs the application (apart from the first time) there is a second-long pause in addition to the usual delay. The first run immediately after starting Visual Studio seems to eliminate this pause, but it's still nowhere near the speed on my computer.
Incidentally, there is a normal delay of around a quarter of a second caused by the application starting. If I change the application to require a button click for the first query, the second's pause and usual 2 second delay happen only after the button is clicked.
Another thing of note is the amount of memory the application uses. Most of the time on my computer it will use around 40MB, but on the other computer it never seems to use more than 35MB. Could there be some kind of memory optimisation going on that is slowing things down for the other computer? Maybe my computer is loading some additional/cached information into memory that the others are having to generate. If this is possible, any thoughts on where I might look for this?
EDIT 3
I've been holding off making changes to the model and database because I was worried the delay would come back and I'd not have anything to test against. Just wanted to add that after exhausting all other possibilities, I've tried modifying a POCO and the database and it's still quick on my computer but slow on others.
I've altered the title of this question to more accurately reflect the problem I'm trying to solve.

Query plans in SQL Server can change over time. It may be that your machine has cached a good query plan while your co-worker's machine has not. In other words, it may have nothing to do with EF. You could confirm or rule out this theory by running the same query by hand in Management Studio.
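If you want to test that theory beyond re-running the query by hand, a sketch like this (assuming you have VIEW SERVER STATE permission; the table name is taken from the question) would show whether a plan for the query is sitting in the cache, and let you clear the cache to force a fresh compile:

```sql
-- Check whether a plan for the query is cached (requires VIEW SERVER STATE)
SELECT cp.usecounts, cp.cacheobjtype, cp.objtype, st.text
FROM sys.dm_exec_cached_plans AS cp
CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) AS st
WHERE st.text LIKE '%AccountStatus%';

-- Evict all cached plans so the next run compiles from scratch
-- (affects the whole instance - use with care on a shared server)
DBCC FREEPROCCACHE;
```

If the delay comes back after clearing the cache, plan compilation is at least part of the story.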

In order to tackle performance problems related to EF-generated queries, I advise you to use SQL Server Profiler or an alternative (as Express editions do not include it) to see how much of your time is actually consumed running the query.
If most of your time is used for running the query, as already suggested by jackmott, you can run it in SSMS by toggling Actual Execution Plan to see the generated plan in each situation.
If time is spent on something else (some C# warm-up or similar), Visual Studio 2015 has built-in performance analysis that can be used to see where it is spending that time.

Related

SQL Server Application - Resetting database to original state

Background
I need to write some integration tests in C# (about 120 of them) for a C#/SQL Server application. Initially, before any test, the database will already be there; the reason is that a lot of scripts are run to set it up (about 20 minutes of running time). When I run my tests, a few tables will be updated (CRUD operations). For example, a few rows will be added in 10-11 tables, a few rows updated in 15-16 tables, and a few rows deleted in 4-5 tables.
Problem
After every test is run, the database needs to be reset to its original state. How can I achieve that?
Bad Solution
After every run of a test, re-run the database creation scripts (20 minutes of running time). Since there will be around 120 tests, this comes to 40 hours, which is not an option. Secondly, there is a process that keeps several connections open against this database, so the database cannot be dropped/re-created.
Good Solution?
I would like to know if there is any other way of solving this problem. Another problem I have is that, for each of those tests, I don't even know which tables will be updated, so if I were to revert the database to its original state manually by writing queries, I would have to go and check which tables were updated anyway.
You should take a look at the possibilities MSSQL gives you with database snapshots. Reverting to a snapshot is potentially a lot faster than restoring a backup or recreating the database.
Managing a test database
In a testing environment, it can be useful when repeatedly running a test protocol for the database to contain identical data at the start of each round of testing. Before running the first round, an application developer or tester can create a database snapshot on the test database. After each test run, the database can be quickly returned to its prior state by reverting the database snapshot.
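As a rough sketch of what that looks like in T-SQL (the database, logical file, and path names here are placeholders - the logical name must match the source database's data file):

```sql
-- Take the snapshot once, after the 20-minute setup scripts have run
CREATE DATABASE TestDb_Snapshot
ON (NAME = TestDb_Data, FILENAME = 'C:\Snapshots\TestDb_Snapshot.ss')
AS SNAPSHOT OF TestDb;

-- After each test, revert the database to the snapshot state
USE master;
RESTORE DATABASE TestDb FROM DATABASE_SNAPSHOT = 'TestDb_Snapshot';
```

Note that reverting requires exclusive access to the database, so the process holding open connections would need to be disconnected briefly, and any other snapshots of the same database must be dropped before the revert.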

Reducing time between starting an MVC web application and viewing the first page

Environment:
Visual Studio 2015 Update 1
IIS Express 10
Moderately sized MVC web application
.NET Framework 4.6.1 x64 debug builds
Newer Core I7 laptop, plenty of ram, SSD drive
Making a small change in one project's .cs file, I hit the green arrow to test out the change. It takes about 8 seconds for the build to finish and Chrome to pop open a new tab. Not bad. But then it takes about 30 seconds for the first page to show up.
What can be done to reduce that delay? Would pre-compiled views be the first order improvement here? What are some of the best current techniques to achieve that?
Try installing Glimpse. Correctly set up, it will show you where the delay is, including the database calls and their durations.
Install Redgate ANTS and step through the code locally (potentially pointing at the production database if this is a live problem); this tool should be able to tell you where any slowdown is.
One of the features is:
Jump straight to the slowest activity
The call tree in the .NET performance profiler shows you data for
every method and identifies the most expensive methods, database
queries, and web requests
There is a 14 day free trial, which should be enough time to diagnose your problem.

Rebuild Index Task Breaks Compilation of Stored Procedure

I have a maintenance plan that runs on my SQL Server 2008 server every morning before business hours. It was put in place a few years ago to help with some performance issues. The problem that I am seeing is that after that rebuild index finishes, there is a stored procedure in one of the databases that will go from taking nine seconds to run to taking seven minutes to run.
The solution I have found to fix it is to open SQL Management Studio and run:
EXEC sp_recompile N'stored_proc_name';
EXEC stored_proc_name @userId = 579;
After I run that, the SP fixes itself and goes back to running under nine seconds.
I've tried a couple of different paths to automate this, but it will only work if I run it from my computer through management studio. I tried to wrap it up in a little C# executable that ran a few minutes after the rebuild index job completes, but that didn't work. I also tried creating a SQL job to run it on the server after the rebuild index job completes, but that didn't work either. It has to be run from management studio.
So, two questions:
How can I stop rebuild index from breaking my SPs, or,
Any ideas on how or why my quick fix will only work in a very specific situation?
Thanks,
Mike
This sounds like standard parameter sniffing / parameter-based query-plan caching. The trick here is usually to use the OPTIMIZE FOR / UNKNOWN hint - either for the specific parameter that is causing the problem, or simply for all parameters. This makes it much less likely that a parameter-value with biased distribution will negatively impact the system for other values. A more extreme option (more useful when using command-text, not so useful when using stored procedures) is to embed the value directly into the TSQL rather than using a parameter. This... has impact, however, and should be used with caution.
In your case, I suspect that adding:
OPTION (OPTIMIZE FOR (@userId UNKNOWN))
to the end of your query will fix it.
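For example, if the body of the procedure is a single query (the table and column names here are placeholders for your actual ones), it would look something like:

```sql
ALTER PROCEDURE dbo.stored_proc_name (@userId int)
AS
BEGIN
    SELECT *
    FROM dbo.SomeTable        -- placeholder for your actual query
    WHERE UserId = @userId
    OPTION (OPTIMIZE FOR (@userId UNKNOWN));
END
```

With the hint in place, the plan compiled after the index rebuild is based on average density rather than whichever @userId value happens to trigger the first compile.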

Unexplained timeouts when running stored procedures

Background - I have a website & a windows scheduled job which are a part of an MSI and get installed on the same server. The website is used by the end-user to create some rules and the job is scheduled to run on a daily basis to create flat files for the rules created by end-user. The actual scenarios are way more complex than explained above.
Problem (with the website) - The website works fine most of the time, but sometimes it just won't load the rule creation page - and the exception being logged is 'query timeout or SQL server not responding'.
Problem (with the job) - The job behaves just like the website and sometimes fails with the same exception - 'query timeout or SQL server not responding'.
What I've tried -
I've added 'Connection Timeout' to the SQL connection string - doesn't seem to help with the logging - which would tell me if it was a SQL connection timeout or a query timeout.
I've also run the stored procedures which are called by the website & job - and ALL the stored procedures complete well within the business defined timeout of 3600 seconds. The stored procedures actually complete in under a minute.
I've also run SQL Profiler - but the traces didn't help me either - though I could see a lot of transactions, I couldn't pinpoint anything wrong with the server.
What I seek - Are there any other reasons which could cause this? Is there something which I could look for?
Technology - SQL Server 2008 R2, ASP.NET, C#
Restrictions - The code details can't be revealed due to client confidentiality, though I'm open to questions - which I'd try to answer keeping client confidentiality in mind.
Note - There is already a query timeout (3600s) & connection timeout (30s) defined in the application config file.
So, I tried a few things here and there and was able to figure out the root cause -
The SQL stored procedure was joining 2 tables from 2 different databases - one of which had a varying number of records, updated/inserted by a different (3rd-party) job. Since the 3rd-party job and my job did not run at the same time, no issue came up due to table locks, but the sheer volume of records caused my job to time out when my timeout was not enough.
But, as I said, I'd given the business-standard command timeout of 3600 seconds - somehow Enterprise Library was overriding my custom timeout with its own default command timeout of 30s - and hence the C# code would throw an exception even before the stored procedure had finished executing.
What I did - This may be of help for some of us -
I removed the reference of Enterprise Library from the project
Cleaned up my solution and checked into SVN.
Then cleaned up SVN as well.
I didn't build the application after removing Enterprise Library reference - obviously it wouldn't build due to reference errors.
After that, I took a clean checkout and added Enterprise Library again.
Now it seems to work even with varying number of records.
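If you'd rather not depend on any library's default at all, you can also set the timeout explicitly on each command; a minimal ADO.NET sketch (the connection string and procedure name are placeholders):

```csharp
using System.Data;
using System.Data.SqlClient;

// Placeholder connection string and procedure name
using (var conn = new SqlConnection("Server=myServer;Database=myDb;Integrated Security=true"))
using (var cmd = new SqlCommand("dbo.MyLongRunningProc", conn))
{
    cmd.CommandType = CommandType.StoredProcedure;
    cmd.CommandTimeout = 3600; // seconds; the ADO.NET default is 30
    conn.Open();
    cmd.ExecuteNonQuery();
}
```

That way a misconfigured wrapper can't silently reintroduce the 30-second default.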
I had the same problem just yesterday. A huge query was taking 18 seconds in SQL Server but timing out in C# even after 200 seconds. I rebooted my computer, disconnected the DB, and even disconnected the server... nothing changed.
After reading some threads, I noticed a common theme about indexes. So I removed some indexes in my database, put some back, and voilà! Back to normal.
Here's what I think may have happened. While I was running some tests, I probably still had some zombie connections left, and my colleague was creating some tables in the DB at the same time and linking them to tables used in my stored procedure. Even though the newly created tables had nothing to do with the stored procedure, having them linked to the other ones seems to have messed up the indexes. Why did only the C# side misbehave? My guess is there's a cache in SQL Server that isn't used when connecting from anywhere other than SQL Server directly.
N.B. In my case, just altering the stored procedure didn't have any effect at all, even if it was a common "solution" among some threads.
Hope this helps if someone has the same problem. If anyone can find a better solution/explanation, please share!!!
Cheers,
I had a similar problem with MSSQL and did not find any particular reason for this unstable behaviour. My solution was to update the database's statistics with
sp_updatestats
every hour.
You can use WITH RECOMPILE in your stored procedure definition to avoid the 'query timeout or SQL server not responding' error.
Here's the Microsoft article:
http://technet.microsoft.com/en-us/library/ms190439.aspx
Also see this for reference:
SQL Server: Effects of using 'WITH RECOMPILE' in proc definition?
Sample Code:
CREATE PROCEDURE [dbo].[sp_mystoredproc] (@param1 varchar(20), @param2 int)
WITH RECOMPILE
AS
... proc code ...

SQL (or linq to sql) works slower when called from winforms than when called on web application?

We ran into strange sql / linq behaviour today:
We used to use a web application to perform some intensive database actions on our system. Recently we moved to a winforms interface for various reasons.
We found out that performance has seriously decreased: an action that used to take about 15 minutes now takes as long as a whole hour. The strange thing is that it's the exact same method being called. The method performs quite a bit of reading/writing using LINQ to SQL, and profiling on the client machine showed that the problematic section is the SQL action itself, in LINQ's "Save" method.
The only difference between the cases is that on one case the method is called from a web application's code behind (MVC in this case), and on the other from a windows form.
The one idea I could come up with is that SQL performance has something to do with the identity of the user accessing the db, but I could not find any support for that assumption.
Any ideas?
Did you run both tests from the same machine? If not, hardware differences could be the issue... or the network... one machine could be in a higher-speed section of your network, like in the same VLAN as the SQL server. Try running the client code on the same server the web app was running on.
Also, if your app is updating progress in a synchronous manner, it could be waiting a long time for the display to update... as opposed to working with a stream à la Response.Write.
If you are actually outputting progress as you go you should make sure that the progress updates are events and that the display of those happens on another thread so that the processing isn't waiting on display. Actually you probably should put the processing on its own thread... and just have an event handler take care of the updates... that is a whole different discussion. The point is that your app could be waiting to update the display of progress.
It's a very old issue but I happened to run into the question just now. So, for whom it may concern nowadays, the solution (and thereby the problem) was frustratingly silly: LINQ to SQL was configured on the dev machines to constantly write a log to the console.
This was causing a huge delay due to the simple act of outputting a large amount of text to the console. On the web server the log was not being written, and therefore there was no performance drawback. There was a colossal face-palm once we figured this one out. Thanks to the helpers; I hope this answer will help someone solve it faster next time.
Unattended logging. That was the problem.
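For anyone debugging the same thing, the switch in LINQ to SQL is the DataContext.Log property (a TextWriter); the context type name below is a placeholder for your generated DataContext:

```csharp
using System;

var db = new MyDataContext(); // placeholder: your generated DataContext

#if DEBUG
db.Log = Console.Out;   // dumps every generated SQL statement - very slow for bulk work
#else
db.Log = null;          // no logging overhead in release builds
#endif
```

Setting Log to null (or pointing it at a cheap sink) on the dev machines removes the console-output cost without touching the web server configuration.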
