What causes the slowdown in crawling from a laptop program? - c#

I have a project that I salvaged from a server that was being outsourced, and I have managed to get most of it working on one of the laptops I have at home. It runs Windows 8.1, VS 2017 and SQL Server Express 2017, and the DLL I use in my app is written in C# against .NET 4.6.1.
Because there is no SQL Server Agent in SQL Server Express, every night at midnight I manually run some stored procs that fill some stat tables, then run an index-maintenance proc that either REBUILDs or DEFRAGs the indexes and rebuilds statistics, before manually restarting the BOT from a command prompt just after midnight.
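To give an idea of what I mean by automating it, this is the sort of thing I could have Windows Task Scheduler run at midnight instead of typing the commands by hand. It is only a sketch; the connection string and stored procedure names below are placeholders, not my real ones:

using System;
using System.Data;
using System.Data.SqlClient;

// Sketch of a tiny console app that Task Scheduler could run nightly.
// usp_FillStatTables and usp_IndexMaintenance are placeholder names.
class NightlyMaintenance
{
    static void Main()
    {
        var connStr = @"Server=.\SQLEXPRESS;Database=MyProjDb;Integrated Security=true";
        using (var conn = new SqlConnection(connStr))
        {
            conn.Open();
            foreach (var proc in new[] { "usp_FillStatTables", "usp_IndexMaintenance" })
            {
                using (var cmd = new SqlCommand(proc, conn))
                {
                    cmd.CommandType = CommandType.StoredProcedure;
                    cmd.CommandTimeout = 0; // index rebuilds can run well past the default 30s
                    cmd.ExecuteNonQuery();
                    Console.WriteLine("{0:u} finished {1}", DateTime.UtcNow, proc);
                }
            }
        }
    }
}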
However, I have noticed that if I leave the laptop on for 3-5 days, each run, which fetches on average 40 races and 5-20 runners per race through proxies, gets slower and slower. I just rebooted, because last night it took from 1am to 11am to crawl the races and runners, scan them with regexes to extract the info, and save them to the DB.
However, if I look at the CreateDate times I store on every new race, I can see a pattern:
Yesterday: 10 hours for 40 races and runners
Saturday: 4 hours for 50 races and runners
Friday: 3 hours for 49 races
Thursday: 5 hours for 42 races
Wednesday: 5 hours for 32 races
Tuesday: 1 hour for 36 races
Obviously over time more and more races and runners are stored in the DB, so retrieval times from indexes, storage etc. get longer, but after a reboot it is quick again. I just restarted it tonight, rebuilt the indexes, then let it go, and it's already done 7 races in 7 minutes.
Obviously I haven't got a server to put this on; the last attempt resulted in an old boss putting it on a French server that doesn't allow access to online betting sites, and my BOT uses the Betfair API.
It runs on my laptop OK apart from:
- The time to get all races and runners into the DB lengthens over time. The longer I leave the laptop on, the longer it takes, despite all the clean-up operations I do nightly (deleting old log messages and locks, and rebuilding the stat tables before a reindex/defrag job).
- For some reason the logfile I write debug messages to for after-the-fact debugging (I look for SQL errors, connection errors, proxy issues and regex errors), which the console app currently hosting the DLL writes to C:\programdata\myproj\logfile.txt, misbehaves. The file has the right permissions and is written to during the run, but once the job is over, if I try to open it in my standard editor, EditPlus, it just opens as a blank document. If I open it in Notepad first I can see all the debug output, and I can then copy and paste it into a blank EditPlus document.
It never did this on my work PC. Permissions are okay, the file is being written to, and I don't get any "permission denied" or other I/O errors when opening the logfile; it just appears empty unless I open it in Notepad first.
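In case the logging itself is relevant, here is roughly the pattern I could switch the log writing to in order to rule out buffering or encoding problems. This is only a sketch and the DebugLog helper is hypothetical, not my actual code:

using System;
using System.IO;
using System.Text;

// Sketch: open the file per message, write with explicit UTF-8 (no BOM) and let
// Dispose flush and release the handle, so other editors see a complete file.
static class DebugLog
{
    static readonly object Sync = new object();
    const string LogPath = @"C:\programdata\myproj\logfile.txt";

    public static void Write(string message)
    {
        lock (Sync)
        {
            using (var writer = new StreamWriter(LogPath, append: true, encoding: new UTF8Encoding(false)))
            {
                writer.WriteLine("{0:u} {1}", DateTime.UtcNow, message);
            }
        }
    }
}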
So I'd like to know what sort of things happen to slow this job down over time that a reboot fixes. I know the old saying we used to get from our techies whenever we had a bug or issue with our PCs at work, "have you tried turning it on and off again", which does, for some reason, fix so many issues.
I'd just like to know what sort of issues could be slowing it down over the days, so that I could maybe automate a clean-up job to stop it happening. I used to run the exact same code on my work PC, connected remotely to a server, every day for months before being forced to reboot by Windows Updates, so it never used to do this despite my bad practice at work of leaving my PC on all the time.
Is the disk getting fragmented? If so, why would a reboot fix it without needing a disk defrag?
The registry? What could get worse and worse over time that a reboot fixes?
Or is it the fact that I am using SQL Server Express 2017 and there is some I/O issue with the files it writes to that slows down over time?
I would just like to be able to leave my laptop on with this BOT running at specific times during the day and not worry about it taking 11 hours to complete the first import job.
It is now 37 minutes past the hour; it has been running for 20 minutes and has imported 15 races and their runners, about a quarter of the total, so it should be finished in about an hour's time tonight. I have JUST restarted my laptop, nothing else, and that has sped it up from the 10 hours it took last night.
What could be slowing it down over time, and can I fix it at all?

Related

Start a program (C# and MySQL database) simultaneously on several computers

I have to start a program on several computers at the same time (within milliseconds).
The program reads from and writes to a MySQL database. To ensure the simultaneous start I thought about SqlDependency, but does that also work for MySQL? Are there any other ways to guarantee a synchronous start?
It's a project inside a laboratory, so it's just a small network.
Greetz
Synchronize the operating system clocks with NTP and schedule the start (once loaded) to ensure real-time synchrony.
Give your program a clock time at which to start, and put in some trigger to do it.
I've worked in distributed systems for years, and this is the most common solution.
The accuracy of today's clocks starts around microseconds (10^-6 s), and there are high-accuracy clocks in the 10^-7 to 10^-8 s range.
You can read a discussion about clocks here, and about time synchronization vs alternatives here.
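A rough sketch of what that looks like in C#; the agreed start time and the work itself are placeholders:

using System;
using System.Threading;

// Sketch: assume the OS clock is already NTP-synchronized, then block until an
// agreed-upon instant before starting the shared work on every machine.
class SynchronizedStart
{
    static void Main()
    {
        // Agreed start time; in practice it would come from configuration or a command-line argument.
        var startAt = new DateTime(2016, 1, 28, 12, 0, 0, DateTimeKind.Utc);

        var remaining = startAt - DateTime.UtcNow - TimeSpan.FromMilliseconds(50);
        if (remaining > TimeSpan.Zero)
            Thread.Sleep(remaining);              // coarse wait, wake up slightly early

        while (DateTime.UtcNow < startAt) { }     // busy-wait the last few milliseconds

        Console.WriteLine("Started at {0:O}", DateTime.UtcNow); // placeholder for the MySQL work
    }
}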

Why does VS 2015's "stopping your diagnostics session" take forever?

I am trying to analyze a WPF project (WPF, .NET 4.6.1, EF 6, Moq) on an i5 machine with Windows 10 64-bit, using the performance profiler with only "Timeline" activated.
The problem is that on stopping the program I am stuck on the "Report.....diagsession" tab with the message "Microsoft Visual Studio is stopping your diagnostics session" and the rotating hourglass. Sometimes it just times out; other times I get to the report eventually, but 5 to 20 minutes later.
Interestingly, the time spent waiting for the diagnostics session to stop is included in the report. It is as if the process collecting the data never gets the message to stop recording.
Using Windows Resource Monitor I noticed VsStandardCollector.exe writing huge amounts of data to a subfolder of "C:\Users\XXX\AppData\Local\Temp\": about 9 gigabytes in my last try, covering 10 minutes in total, while my application only ran for 30 seconds before I stopped it.
Anyone with an idea what could cause the delay in stopping the session?
CPU and disk use are very low while waiting (< 5%).
Recently I learnt about PerfView, a tool that is used for performance analysis even inside Microsoft. It's much cheaper than Visual Studio; in fact, it is free.
So you can use it to analyze the performance of Visual Studio to answer your question, or better still, use it to analyze the performance of your own WPF application.
It seems that if your project consumes more than 4 GB, the standard profiler is extremely slow and sometimes hangs completely on some unknown, invisible internal problem.
I managed to get through this with the steps below:
Build the Release version.
Run the profiler with its state set to paused.
Give VS more permissions for your debugging account if it asks for them.
Activate the profiler for a short period of time.
Close the application.
Wait 10-20 minutes or more for the report to be generated.

Entity Framework startup slow on everyone's computer but mine

We started developing an application in C# recently and decided to try Entity Framework 6.1.3 with a code-first approach to handle persistence. The data is being stored in a SQL Server Express 2012 instance which is running on a local server. The application is still very small. Currently we are only persisting two POCOs and have two tables in the database: one with 5 fields (about 10 rows of data), and one with 4 fields (2 rows).
We are experiencing a 3 second delay when Entity Framework processes the first query with subsequent queries being extremely quick. After some searching we found that this is normal for Entity Framework so, although it seemed excessive for such a small database, we've been coping with it.
After doing some work today, none of which was specifically related to persistence, suddenly I found that the first query was only taking a quarter of a second to run. I couldn't see anything obvious in the changes I'd made so I uploaded them to source control and asked my colleague to download everything. When he builds and runs it on his computer, it still takes 3 seconds for the first query. I've compiled the application, tried it on two test computers and they experience the initial 3 second delay.
There seems to be no correlation between the problem and the computers/operating systems. My computer is running Windows 7 SP1 x64. My colleague's development computer is running Windows 10 x64. The other two test computers are running Windows 7 SP1 x86. They are all a similar specification (Core i5, 4GB/8GB RAM).
In my research of the delay I found that there are several things you can do to improve performance (pre-generating views etc) and I have done none of this. I haven't installed anything or made any changes to my system, although I suppose it's possible an update was installed in the background. Nothing has changed on the database server or in the database itself or in the POCOs. We are all connecting to the same database on the same server.
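For context, the sort of warm-up that gets suggested looks roughly like the sketch below (MyDbContext stands in for our real context type); I include it only to show what I have not done:

using System.Threading.Tasks;

// Sketch: force EF to build its model on a background thread at startup so the
// first user-visible query doesn't pay the one-off initialization cost.
public static class EfWarmup
{
    public static void Start()
    {
        Task.Run(() =>
        {
            using (var ctx = new MyDbContext())        // placeholder for the real DbContext type
            {
                ctx.Database.Initialize(force: false); // triggers model creation / view generation
            }
        });
    }
}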
This raises a few obviously related questions. If it's possible for it to start up in a quarter of a second, why has it been taking 3 seconds up until now? What happened on my computer to suddenly improve performance and how can I replicate this on the other computers that are still slow?
Can anyone offer any advice please?
EDIT
I turned on logging in Entity Framework to see what queries were being run and how long they were taking. Before the first and, for testing purposes, only query is run, EF runs 3 queries to do with migration. The query generated to retrieve my data follows that and is as follows:
SELECT
[Extent1].[AccountStatusId] AS [AccountStatusId],
[Extent1].[Name] AS [Name],
[Extent1].[Description] AS [Description],
[Extent1].[SortOrder] AS [SortOrder]
FROM [dbo].[AccountStatus] AS [Extent1]
-- Executing at 28/01/2016 20:55:16 +00:00
-- Completed in 0 ms with result: SqlDataReader
As you can see it runs really quickly which is hardly surprising considering there are only 2 records in that table. The 3 migration queries and my query take no longer than 5ms to run in total on both my computer and my colleague's computer.
Copying that query into SSMS and running it from various other machines produces the same result. It's so fast it doesn't register a time taken. It certainly doesn't look like the query causes the delay.
EDIT 2: Screenshots of diagnostic tools
In order to give a good comparison I've altered the code so that the query runs at application start. I've added a red arrow to indicate the point at which the form appears. I hadn't noticed before but when my colleague runs the application the first time after starting Visual Studio, it's about a second quicker. All subsequent times are slower.
1) Colleague's computer - first run after loading Visual Studio
2) Colleague's computer - all subsequent runs
3) My computer - all runs
So every time my colleague runs the application (apart from the first time) there is a second's pause in addition to the usual delay. The first run immediately after starting Visual Studio seems to eliminate this second's pause, but it's still nowhere near the speed on my computer.
Incidentally, there is a normal delay of around a quarter of a second caused by the application starting. If I change the application to require a button click for the first query, the second's pause and usual 2 second delay happen only after the button is clicked.
Another thing of note is the amount of memory the application uses. Most of the time on my computer it will use around 40 MB, but on the other computer it never seems to use more than 35 MB. Could there be some kind of memory optimisation going on that is slowing things down for the other computer? Maybe my computer is loading some additional/cached information into memory that the others are having to generate. If this is possible, any thoughts on where I might look for this?
EDIT 3
I've been holding off making changes to the model and database because I was worried the delay would come back and I'd not have anything to test against. Just wanted to add that after exhausting all other possibilities, I've tried modifying a POCO and the database and it's still quick on my computer but slow on others.
I've altered the title of this question to more accurately reflect the problem I'm trying to solve.
Query plans in SQL Server can change over time. It may be that your machine has cached a good query plan while your co-worker's machine has not. In other words, it may have nothing to do with EF. You could potentially confirm or deny this theory by running the same query by hand in Management Studio.
In order to tackle performance problems related to EF-generated queries, I advise you to use SQL Server Profiler or an alternative (as Express editions do not include it) to see how much of your time is actually spent running the query.
If most of the time is spent running the query, as already suggested by jackmott, you can run it in SSMS with "Include Actual Execution Plan" turned on to see the generated plan in each situation.
If the time is spent on something else (some C# warming up or something similar), Visual Studio 2015 has built-in performance analysis that can be used to see where it is spending that time.
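A rough sketch of that kind of measurement, with MyDbContext and AccountStatuses standing in for your own context and DbSet:

using System;
using System.Diagnostics;
using System.Linq;

// Sketch: time the first query end-to-end and let EF6 log the SQL it sends,
// so you can see whether the delay is in the query or in EF initialization.
class FirstQueryTiming
{
    static void Main()
    {
        using (var ctx = new MyDbContext())              // placeholder context type
        {
            ctx.Database.Log = s => Debug.Write(s);      // EF6 writes commands and timings here

            var sw = Stopwatch.StartNew();
            var statuses = ctx.AccountStatuses.ToList(); // placeholder DbSet for the first query
            sw.Stop();

            Console.WriteLine("First query took {0} ms total", sw.ElapsedMilliseconds);
        }
    }
}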

Quartz.Net problems updating a job, error reporting/monitoring

I have a Quartz.NET client/server setup that fires some text messages on a schedule using a third-party library.
Quartz.Server.exe is running as a Windows service on my staging and development environments, pointing at jobs stored in SQL Server, and my website uses Quartz.Simpl.ZeroSizeThreadPool and just schedules the jobs. Everything was working fine until it wasn't.
Apparently something caused the exe to stop running, and even though it was running as a Windows service with recovery options set to send me an email when it went down, I did not get the email.
So when I restarted the server, 15 days' worth of old text messages went out with misleading phrases like "Appointment tomorrow @ 9:00 AM" for appointments from 5 days earlier.
So now I have updated my IJob code so that the job is discarded if the fire-time is after the appointment time. Since I was using DateTime instead of plain strings I changed "quartz.jobStore.useProperties = false". When I was ready to deploy I realized that this change along with some other changes could break the scheduler so that it wouldn't fire historical triggers from before the changes.
I spiraled down a 12+ hour black hole trying to wrestle with the settings to get my new jobs to fire alongside my old jobs locally and on my staging environment. Tried a million things, including every combination of Quartz running as a service or console pointing to SQL local or on staging. Said potty-words. Then said some prayers.
Started over. Tested whether the working code on the live server would fire both new and old jobs. It wouldn't. I changed "quartz.jobStore.useProperties = false" in quartz.config. It worked! So I did a new deployment Monday evening with the changes and everything seemed to be working fine. I did a "keeping my job" dance, revisited my error logging/recovery setup, and created a nightly SQL job to count the number of triggers for yesterday, today, and tomorrow each morning.
Here it is Wednesday morning, and I check the SQL job I set up only to find that 150+ triggers from yesterday have not been handled (everything past 9:00 AM EST). So I go to my live server and the Quartz service is still running. I stop the process, go to the folder where it lives, and Right-Click > Run as Admin. Here is the error I get in my log.txt file. And of course now the Windows service fails to start.
I need major help fast! I need a montage, and at the end of the Montage I need to be a quartz.net ninja. Marko Lahma (or anyone) will you "hold my hand" and tell me I am going to be able to keep my job? I need my setup to be bulletproof (be more robust/fail more noticeably).
EDIT - 201404241938Z
Here is my code to check for the new values that broke the old jobs
// I think this next line is causing an error
DateTimeOffset oDateReminder = data.GetDateTimeOffset("reminderTime");
if (oDateReminder.DateTime > DateTime.MinValue)
{
    // Do stuff with other "new job" datamap keys
    …

// Changing it to…
if (data["reminderTime"] != null)
{
    // Do stuff with other "new job" datamap keys
    …
EDIT - 201404242205Z
This seems to have worked. I'll wait and award some bounty
As you probably know, the data in the DB is now corrupted, containing both JobDataMaps (binary) and NameValueCollections (binary).
Would you like to give the latest head of the repository's master branch a whirl? I've ninja-committed support for recovering from this mixed-data situation. Just take the latest, build with a custom .snk file, and drop the DLL in to run with.
This is version 2.2.x so take note if you are using something like 2.1 or 2.0.
The issue is tracked here https://github.com/quartznet/quartznet/issues/172
As for your original issue, one easy fix would be to ignore misfires: if an SMS notification doesn't fire within the expected time, it should probably be ignored rather than fired as soon as possible afterwards. Logging might also help you find out what caused the original downtime (Windows updates, SQL Server not started yet, etc.).
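Roughly, the misfire policy is set when the trigger is built; the identity, schedule and reminderTime below are placeholders:

using Quartz;

// Sketch: build the reminder trigger so that a missed fire time is discarded
// rather than fired late once the scheduler comes back after downtime.
ITrigger trigger = TriggerBuilder.Create()
    .WithIdentity("smsReminder", "reminders")                    // placeholder names
    .StartAt(reminderTime)                                       // placeholder DateTimeOffset for the reminder
    .WithSimpleSchedule(x => x
        .WithMisfireHandlingInstructionNextWithRemainingCount()) // misfired executions are dropped
    .Build();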

Webtest execution time much longer when in load test

I made a simple web performance test in VS 2012 which just logs in and does some basic jobs on the website. When run on its own, this test only takes about a second or two.
Now I have made a load test containing only this one web test, with a constant load for 5 minutes and the load pattern based on a number of virtual users. And here comes the funny part: no matter how many users I assign to this load test, the number of tests executed is always the same as the number of users assigned. I also tried raising the load test run time, which gave the same result: one test = 5 minutes per user, whereas the web test on its own only takes about 1-2 seconds to execute.
Does anyone have any idea why the test takes so much longer (300 times) in the load test? What am I doing wrong here?
Edit: The machine is a Windows Server 2008 R2 box with 4 cores @ 3.00 GHz and 8 GB RAM.
Here are some images of the settings:
Notice the Run Duration: 5:00
You've set each test to run for five minutes. So basically it runs through the test, then sits there idle for the rest of the time, closes the test, and then reports back that it took 5 minutes. You won't see any difference unless you change this setting or create a test that could take longer than 5 minutes to run.
