Webtest execution time much longer when in load test - c#

I made a simple Web Performance Test in VS2012 that just logs in and does some basic jobs on the website. When run on its own, this test only takes about a second or two.
Now I made a load test containing only this one web test, with a constant load for 5 minutes and the load pattern based on the number of virtual users. And here comes the funny part: no matter how many users I assign to this load test, the number of tests executed is always the same as the number of users assigned. I also tried raising the load test duration, which gave the same result: one test = 5 minutes per user, whereas the web test on its own only takes about 1-2 seconds to execute.
Does anyone have any idea why the test takes so much longer (about 300 times) in the load test? What am I doing wrong here?
Edit: The machine is a Windows Server 2008 R2 box with 4 cores @ 3.00 GHz and 8 GB RAM.
Here are some images of the settings:

Notice the Run Duration: 5:00
You've set each test to run for five minutes. So basically it runs through the test, then sits there idle for the rest of the time, closes the test and then reports back to you that it took 5 minutes. You won't see any difference unless you change this setting or create a test that could take longer than 5 minutes to run.

Related

Azure pipeline concurrent (not parallel) tests execution for xunit tests

I am using an Azure pipeline to execute API tests. The tests are executed serially, and I have hit the point where my job runs for over 1 hour - this means the agent fails, because 1 hour is the maximum job execution time. I have started to read up on how to execute tests in parallel; in xUnit, tests should run in parallel by default when they are not in the same collection. In Azure, however, following https://learn.microsoft.com/en-us/azure/devops/pipelines/test/parallel-testing-vstest?view=azure-devops, running tests in parallel requires more than one agent and apparently more than one job. There are two cons in my case:
we have a limited number of agents and other people want to use them too, so I do not want to occupy more agents than necessary;
I need to configure two Cosmos DB firewalls for each job (1 job = 1 agent, and each agent has a new IP I need to add to the firewall). This task takes 8 minutes per database and I need to configure 2 databases (16 minutes), so each new job would mean spending an additional 16 minutes configuring firewalls.
So my question is: is it possible to run tests concurrently rather than in parallel by utilizing the same core? I have many Thread.Sleep calls (but in my opinion I use them correctly: I wait for some result for a limited time, like 60 seconds, sleeping for 2 seconds at a time and polling, for example, a database every 2 seconds to see whether the results are there). Thread.Sleep means the thread is sleeping, so the core should be available for other threads to run.
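For reference, the polling pattern described above looks roughly like this minimal sketch (the method and delegate names are illustrative, not taken from the actual tests):

using System;
using System.Threading;

static class Polling
{
    // Wait up to timeoutSeconds for a result, checking every pollSeconds.
    // While the thread sleeps, the core is free to run other test threads.
    public static bool WaitForResult(Func<bool> resultIsReady,
                                     int timeoutSeconds = 60,
                                     int pollSeconds = 2)
    {
        var deadline = DateTime.UtcNow.AddSeconds(timeoutSeconds);
        while (DateTime.UtcNow < deadline)
        {
            if (resultIsReady())          // e.g. check the database for the expected rows
                return true;
            Thread.Sleep(TimeSpan.FromSeconds(pollSeconds));
        }
        return false;
    }
}

In async test code, Task.Delay does the same job while handing the worker thread back to the pool, which matters more once many tests share the same threads.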
Apparently I posted the question too soon; I found out that all I need to do is set [assembly: CollectionBehavior(MaxParallelThreads = n)] to the number I want (source: https://xunit.net/docs/running-tests-in-parallel). The default is the number of logical processors, and in the case of Azure agents this number is equal to 1.
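In code, that assembly-level attribute looks like this; the value 4 is only an example, and the attribute can go in any .cs file of the test project (e.g. AssemblyInfo.cs):

using Xunit;

// Let up to 4 xUnit test collections run concurrently inside the single agent
// process, regardless of how many logical processors the agent reports.
[assembly: CollectionBehavior(MaxParallelThreads = 4)]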

What causes the slowdown in crawling from a laptop program?

I have a project that I managed to save from a server that was being outsourced, and I have got most of it working on one of the laptops I have at home. It runs Windows 8.1, VS 2017 and SQL Server Express 2017, and the DLL I use in my app is written in C# against .NET 4.6.1.
Because SQL Server Agent doesn't exist in SQL Server Express, every night at midnight I manually run some stored procs that fill some stat tables, then an index maintenance proc that either REBUILDs or DEFRAGs the indexes and rebuilds statistics, before manually restarting the BOT from a command prompt just after midnight.
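(As an aside, not from the question: since SQL Server Agent isn't available, a small console app along these lines could be scheduled with Windows Task Scheduler to replace the manual midnight run. The connection string and stored procedure names below are hypothetical.)

using System.Data;
using System.Data.SqlClient;

class NightlyMaintenance
{
    static void Main()
    {
        // Hypothetical connection string and proc names - substitute the real ones.
        const string connStr = @"Server=.\SQLEXPRESS;Database=BotDb;Integrated Security=true";
        string[] procs = { "usp_BuildStatTables", "usp_IndexMaintenance" };

        using (var conn = new SqlConnection(connStr))
        {
            conn.Open();
            foreach (var proc in procs)
            {
                using (var cmd = new SqlCommand(proc, conn))
                {
                    cmd.CommandType = CommandType.StoredProcedure;
                    cmd.CommandTimeout = 0; // index rebuilds can run for a long time
                    cmd.ExecuteNonQuery();
                }
            }
        }
    }
}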
However, I have noticed that if I leave the laptop on for 3-5 days, each run - which fetches, on average, 40 races and 5-20 runners per race through proxies - gets slower and slower. I just rebooted, because last night it took from 1am to 11am to crawl the races and runners, scan them with regexes to extract info, and save them to the DB.
However, if I look at the CreateDate times I store on every new race, I can see a pattern:
Yesterday: 10 hours for 40 races and their runners
Saturday: 4 hours for 50 races and their runners
Friday: 3 hours for 49 races
Thursday: 5 hours for 42 races
Wednesday: 5 hours for 32 races
Tuesday: 1 hour for 36 races
Obviously, over time more and more races and runners are stored in the DB, so retrieval from indexes, storage etc. gets slower, but after a reboot it is quick again. I just restarted it tonight, rebuilt the indexes, then let it go, and it has already done 7 races in 7 minutes.
Obviously I haven't got a server to put this on; the last attempt ended with an old boss putting it on a French server that doesn't allow access to online betting sites, and my BOT uses the Betfair API.
It runs on my laptop OK apart from two things:
- the time to get all races and runners into the DB lengthens the longer I leave the machine on, despite all the clean-up operations I do nightly (deleting old log messages and locks, and rebuilding stat tables before a reindex/defrag job);
- for some reason, the logfile I output debug messages to for after-the-fact debugging (e.g. SQL errors, connection errors, proxy issues, regex errors), written by the console app that currently hosts the DLL to C:\programdata\myproj\logfile.txt, behaves oddly: once the job is over, if I open it in my standard editor, EditPlus, it shows a blank document, yet if I open it in Notepad first I can see all the debug output and can copy and paste it into a blank EditPlus document. (A hedged sketch of how the file could be written follows after this list.)
It never did this on my work PC. Permissions are okay, the file is being written to, and I don't get any "permission denied" or other I/O errors when opening the logfile; it just appears empty unless I open it in Notepad first.
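On the blank-in-EditPlus symptom, one thing worth ruling out is how the log file is opened. A sketch like the following (the path is from the question; everything else is an assumption, not the original code) writes with an explicit UTF-8 encoding and a share mode that lets an editor open the file while the bot still holds it:

using System;
using System.IO;
using System.Text;

static class BotLog
{
    const string LogPath = @"C:\programdata\myproj\logfile.txt";

    public static void Append(string message)
    {
        // FileShare.Read lets another program read the file while it is open here;
        // an explicit UTF-8 encoding avoids editors mis-detecting the file's encoding.
        using (var stream = new FileStream(LogPath, FileMode.Append, FileAccess.Write, FileShare.Read))
        using (var writer = new StreamWriter(stream, new UTF8Encoding(false)) { AutoFlush = true })
        {
            writer.WriteLine($"{DateTime.Now:O} {message}");
        }
    }
}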
So I'd like to know what sort of things happen over time to slow this job down that a reboot fixes. I know the old line from the techies at work whenever we had a bug or issue with our PCs - "have you tried turning it off and on again" - which does, for some reason, fix so many issues.
I'd just like to know what sort of issues could be slowing it down over days, so I could maybe automate a clean-up job to prevent it. I used to run the exact same code on my work PC, connected remotely to a server, every day for months, only rebooting when forced to by Windows Updates. So it never used to do this despite my bad habit of leaving my work PC on all the time.
Is the disk getting fragmented? And if so, why does a reboot fix it rather than requiring a disk defrag?
The registry? What could get worse and worse over time that a reboot fixes?
Or is it the fact that I am using SQL Server Express 2017 and there is some I/O issue with the files it writes to that slows down over time?
I would just like to be able to leave my laptop on with this BOT running at specific times during the day and not worry about it taking 11 hours to complete the first import job.
It is now 37 minutes past the hour; the BOT has been running for 20 minutes and has imported 15 races and their runners, about a quarter of the total, so it should finish in about an hour tonight. All I did was restart my laptop, nothing else, and that alone has sped it up from the 10 hours it took last night.
What could be slowing it down over time, and can I fix it at all?

How to execute and run millions of unit tests quickly?

How do you execute millions of unit tests quickly, meaning in 20 to 30 minutes?
Here is the scenario:
You are releasing certain hardware and you have, let's say, 2000 unit tests.
Then you release new hardware and you have an additional 1000 tests for that.
Each new piece of hardware brings its own tests, but you also have to run every previous one, so the number of tests, and with it the execution time, keeps growing.
During development, this is solved by categorizing tests with the TestCategory attribute and running only what you need to.
The CI, however, must run every single test. As the number increases, execution gets slower and sometimes times out. The .testrunconfig already sets parallelTestCount, but over time this does not solve the issue permanently.
How would you solve this?
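To make the TestCategory approach from the question concrete, a minimal MSTest sketch (class and method names are made up) might look like this:

using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class PowerSupplyTests
{
    [TestMethod]
    [TestCategory("Unit")]           // fast, no hardware needed
    public void VoltageParser_ParsesNominalValue()
    {
        Assert.AreEqual(4, 2 + 2);   // placeholder assertion
    }

    [TestMethod]
    [TestCategory("Integration")]    // requires the new hardware attached
    public void Board_RespondsToHandshake()
    {
        // ... talk to the device here ...
    }
}

During development only the fast subset is run, e.g. vstest.console.exe Tests.dll /TestCaseFilter:"TestCategory=Unit", while the CI still runs everything.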
It seems like execution time changes with each update to Visual Studio 2017. We currently have over 6000 tests, of which 15 to 20% are unit tests and the rest are integration tests.
The bottleneck seemed to be the CI server itself, running on a single machine. 70% to 80% of the tests are asynchronous, and analysis showed there are no blocking I/O operations. Besides I/O, we do not use databases or caching, so there is that.
Now we are in the process of migrating to Jenkins and using its Parallel Test Executor plugin to spread the tests across multiple nodes instead of a single machine. Initial testing showed that executing the 6000+ tests takes 10 to 15 minutes, versus the old CI, which took 2 hours or sometimes simply stopped.

JMeter Load Testing C# Rest APIs - Huge sample times

I'm getting unusual results when load testing my C# REST APIs with JMeter.
My API does a few database calls: it selects some data from one table, manipulates it, returns it, and inserts it into other tables.
If I run this API call as a single request, it executes in less than 3 seconds.
1 user, no ramp up = < 3 seconds
However, when running this same API call with 50 users and a 10-second ramp-up time, I get the following results:
50 users, 10-second ramp-up period = greatly increased sample times.
Am I configuring JMeter incorrectly, or is something wrong within my API call that is causing this?

Need help optimizing this query

I have a big problem with the first query run against my SQL Server CE database.
I have already applied the optimizations from "Performance and the Entity Framework", but the first query still takes about 15 seconds to run.
Something I noticed: when I run my application for the first time, the first query takes about 15 seconds. If I close the application and run it again, the first query runs immediately. But if I restart my PC and run the application again, the first query takes 15 seconds once more.
Overall, after two weeks of research on the internet, I could not find a good way to solve my problem.
I used ANTS Performance Profiler and noticed that the first query takes about 11 seconds and form initialization for each page takes 4 seconds the first time.
I have some questions:
Which resources are loaded into RAM when my application starts?
Why is my application fast the second time it runs?
How can I load those resources into RAM before the application starts?
Why do these resources remain in RAM until Windows restarts?
Maybe 15 seconds is acceptable, but when I run my application from a DVD, the first query takes 45 seconds.
Edit:
I use a separate database for each section of my application.
For example, this query takes 11 seconds to run the first time:
public void GetContent(short SubjectID)
{
    // Load the HTML content for the given subject from the AdabTbls table.
    using (var QODB = new QuranOtherEntities(CDataBase.CSQuranOtherEntities))
    {
        CHtmlDesign.HtmlFile = QODB.AdabTbls.First(data => data.ID == SubjectID).Content;
    }
}
Table Structure
First(data => data.ID == SubjectID)
OK, forget Entity Framework for a moment. Grab the generated SQL, run it through a profiler.
This SMELLS like "I have no clue what an index is". Check "Use-the-index-luke.com".
Also make sure that
"Maybe 15 seconds is acceptable, but when I run my application from a DVD, the first query takes 45 seconds"
actually IS the SQL statement. It could easily be the loading and initializing of your application, in which case you are asking a totally irrelevant question here. In that case there is not a lot you CAN do - there are still optimizations etc., but you have to ask specific questions, i.e. do your homework, and this question is then beside the point. This includes, by the way, the time it takes Entity Framework to initialize, which is not related to the query per se but can happen on the first query you run at all.
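For the "grab the generated SQL" step, here is a hedged sketch: assuming QuranOtherEntities is an EF6 DbContext that exposes AdabTbls as an IQueryable, the SQL for the slow call can be printed like this (with an older ObjectContext-based model, cast the query to ObjectQuery and call ToTraceString() instead):

// Fragment; assumes using System; using System.Linq; and the same SubjectID parameter as above.
using (var QODB = new QuranOtherEntities(CDataBase.CSQuranOtherEntities))
{
    var query = QODB.AdabTbls.Where(data => data.ID == SubjectID);

    // EF6: ToString() on the IQueryable returns the SQL that will be sent to the database.
    Console.WriteLine(query.ToString());
}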
