VS 2010 Load Test Results with custom counters - C#

I am new to load testing (and testing in general) with Visual Studio 2010 and I am dealing with several problems.
My question is: is it possible to add a custom test variable to the Load Test results?
I have the following UnitTest:
[TestMethod]
public void Test()
{
Stopwatch testTimer = new Stopwatch();
testTimer.Start();
httpClient.SendRequest();
testTimer.Stop();
double requestDelay = testTimer.Elapsed.TotalSeconds;
}
This unit test is used by many load tests, and I want to add the requestDelay variable to the load test results so I can get Min, Max and Avg values like all the other load test counters (e.g. Test Response Time).
Is that possible?

Using the link from @Pritam Karmakar's comment and the walkthroughs at the end of my post, I finally managed to find a solution.
First, I created a load test plug-in and used the LoadTestStarting event to create my custom counter category and add all my counters to it:
void m_loadTest_LoadTestStarting(object sender, System.EventArgs e)
{
// Delete the category if it already exists
if (PerformanceCounterCategory.Exists("CustomCounterSet"))
{
PerformanceCounterCategory.Delete("CustomCounterSet");
}
//Create the Counters collection and add my custom counters
CounterCreationDataCollection counters = new CounterCreationDataCollection();
counters.Add(new CounterCreationData(Counters.RequestDelayTime.ToString(), "Keeps the actual request delay time", PerformanceCounterType.AverageCount64));
// .... Add the rest of the counters
// Create the custom counter category
PerformanceCounterCategory.Create("CustomCounterSet", "Custom Performance Counters", PerformanceCounterCategoryType.MultiInstance, counters);
}
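For completeness, here is a minimal sketch of how such a plug-in can be wired up (the class name is illustrative): a load test plug-in implements ILoadTestPlugin and subscribes to the LoadTestStarting event in its Initialize method.
using System;
using Microsoft.VisualStudio.TestTools.LoadTesting;

// Hypothetical plug-in class; build it into an assembly referenced by the
// load test project and select it as the load test's plug-in.
public class CustomCountersPlugin : ILoadTestPlugin
{
    public void Initialize(LoadTest loadTest)
    {
        // Hook the handler shown above so the counter category is
        // (re)created just before the load test starts running.
        loadTest.LoadTestStarting += new EventHandler(m_loadTest_LoadTestStarting);
    }

    void m_loadTest_LoadTestStarting(object sender, EventArgs e)
    {
        // ... counter category creation, as shown above
    }
}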
Then, in the Load Test editor I right-clicked the Agent counter set and selected Add Counters... In the Pick Performance Counters window I chose my performance counter category and added my counters to the counter set, so that the load test would gather their data.
Finally, every unit test creates instances of the counters in its ClassInitialize method and then updates them at the proper step:
[TestClass]
public class UnitTest1
{
static PerformanceCounter RequestDelayTime;
[ClassInitialize]
public static void ClassInitialize(TestContext testContext)
{
// Create the instances of the counters for the current test
RequestDelayTime = new PerformanceCounter("CustomCounterSet", "RequestDelayTime", "UnitTest1", false);
// .... Add the rest counters instances
}
[TestCleanup]
public void CleanUp()
{
RequestDelayTime.RawValue = 0;
RequestDelayTime.EndInit();
RequestDelayTime.RemoveInstance();
RequestDelayTime.Dispose();
}
[TestMethod]
public void TestMethod1()
{
// ... Testing
// update counters
RequestDelayTime.IncrementBy(time); // IncrementBy expects a long value (e.g. elapsed milliseconds)
// ... Continue Testing
}
}
Links:
Creating Performance Counters Programmatically
Setting Performance Counters
Including unit test variable values in load test results

I think what you actually need is to use:
[TestMethod]
public void Test()
{
TestContext.BeginTimer("mytimer");
httpClient.SendRequest();
TestContext.EndTimer("mytimer");
}
You can find good documentation here.

Interesting question. I have never tried this, but I have an idea.
Create three class-level properties for MAX, MIN and AVG and update them during each test. Then write out the final values once the entire load test has executed, via a ClassCleanup or AssemblyCleanup method. You will have to run the load test for 1-2 minutes to see which of those attribute methods gets called at the end. You can then print the final values to a flat file on the local drive via a TextWriter; a rough sketch of the idea follows.
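A minimal sketch of that idea, assuming MSTest and an illustrative output path (all names are made up, and the shared state is guarded with a lock because load test iterations run concurrently):
using System;
using System.IO;
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class RequestDelayTests
{
    // Aggregate state shared by all iterations of the load test (per process).
    private static readonly object _sync = new object();
    private static double _min = double.MaxValue;
    private static double _max = double.MinValue;
    private static double _sum;
    private static long _count;

    [TestMethod]
    public void Test()
    {
        double requestDelay = MeasureRequest(); // hypothetical helper returning seconds

        lock (_sync)
        {
            _min = Math.Min(_min, requestDelay);
            _max = Math.Max(_max, requestDelay);
            _sum += requestDelay;
            _count++;
        }
    }

    [ClassCleanup]
    public static void WriteResults()
    {
        // Assumption: this runs once at the end of the test run on this agent.
        using (TextWriter writer = new StreamWriter(@"C:\temp\requestDelay.txt"))
        {
            writer.WriteLine("Min: {0}", _min);
            writer.WriteLine("Max: {0}", _max);
            writer.WriteLine("Avg: {0}", _count > 0 ? _sum / _count : 0);
        }
    }

    private static double MeasureRequest()
    {
        // Placeholder for the Stopwatch-based request timing from the question.
        return 0.0;
    }
}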

Related

Is it possible to build your TestCaseSource list inside SetUp using NUnit?

Can I build my TestCaseData list in my SetUp? With this setup my test is just being skipped, while other regular tests run just fine.
[TestFixture]
public class DirectReader
{
private XDocument document;
private DirectUblReader directReader;
private static UblReaderResult result;
private static List<TestCaseData> rootElementsTypesData = new List<TestCaseData>();
[SetUp]
public void Setup()
{
var fileStream = ResourceReader.GetScenario("RequiredElements_2_1.xml");
document = XDocument.Load(fileStream);
directReader = new DirectUblReader();
result = directReader.Read(document);
// Is this allowed?
rootElementsTypesData.Add(new TestCaseData(result.Invoice.Id, new IdentifierType()));
rootElementsTypesData.Add(new TestCaseData(result.Invoice.IssueDate, new IdentifierType()));
}
[Test, TestCaseSource(nameof(rootElementsTypesData))]
public void Expects_TypeOfObject_ToBeTheSameAs_InputValue(object inputValue, object expectedTypeObject)
{
Assert.That(inputValue, Is.TypeOf(expectedTypeObject.GetType()));
}
}
As stated by @IMil, the answer is No... that's not possible.
TestCaseSource is used by NUnit to build a list of the tests to be run. It associates a method with a particular set of arguments. NUnit then creates an internal representation of all your tests.
OTOH, SetUp (and even OneTimeSetUp) is used when those tests are being run. By that time, the number of tests and the actual arguments to each of them are fixed; nothing can change them.
So, in order to do what you seem to want to do, your TestCaseSource has to stand on its own, fully identifying the arguments to be used for the test. That's why NUnit gives you the capability of making the source a method or property, rather than just a simple list.
In your case, I suggest something like...
private static IEnumerable<TestCaseData> RootElementsTypesData()
{
var fileStream = ResourceReader.GetScenario("RequiredElements_2_1.xml");
var document = XDocument.Load(fileStream);
var directReader = new DirectUblReader();
var result = directReader.Read(document);
yield return new TestCaseData(result.Invoice.Id, new IdentifierType());
yield return new TestCaseData(result.Invoice.IssueDate, new IdentifierType());
}
Obviously, this is only "forum code" and you'll have to work with it to get something that actually compiles and works for your case.
No, this is impossible.
Methods decorated with [SetUp] are run before each test case.
This means NUnit will first build the list of test cases, and only then run Setup() before each of them.
Therefore, your Setup() never gets called, and the list of test cases remains empty.

Profiling C# application without using external tools

I have a C# (dotnet core) application that gets input from the queue and processes it. The processing time is determined by many factors. One of them is the request itself.
My code looks something like this:
public static void Main()
{
DoWork(new TaskInformation("abc"));
}
public static void DoWork(TaskInformation input)
{
InnerWork1(input);
InnerWork2(input);
}
private static void InnerWork1(TaskInformation input)
{
// Running code here.
Thread.Sleep(1000);
}
private static void InnerWork2(TaskInformation input)
{
// Running code here.
Thread.Sleep(2000);
}
In order to improve the performance of my application, I want to develop a small tool that will run and measure execution time. If the time is above some threshold, it will do something.
So, what I want to do is "wrap" my inner functions (InnerWork1/2) with a Stopwatch automatically, without adding the timing code to each method by hand.
The way I'll turn on this profiling feature should be something like:
public static void Main()
{
RunWithProfiling(DoWork, new TaskInformation("abc"));
}
So, in this case the expected profiling result will be:
DoWork - took 3 seconds
InnerWork1 - took 1 second
InnerWork2 - took 2 seconds
Questions:
Is it possible to implement a mechanism like this? Maybe with reflection?
Is it possible to run a C# profiler from code? Something like "the code will profile itself".
You can use a delegate to pass your method to a RunWithProfiling method (in my example I will use Action), then measure its invocation time with a Stopwatch:
private static void RunWithProfiling(Action method)
{
Stopwatch stopwatch = new Stopwatch();
stopwatch.Start();
method.Invoke();
stopwatch.Stop();
Console.WriteLine($"{method.Method.Name} - took {stopwatch.Elapsed.TotalSeconds} second(s).");
}
and usage
public static void DoWork()
{
RunWithProfiling(InnerWork1);
RunWithProfiling(InnerWork2);
}
You could, for example, have RunWithProfiling return a long, then sum the results up to get the whole execution time.
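If you also need to pass the TaskInformation argument, as in the usage the question asks for, a generic overload along the same lines should work. This is just a sketch built on the Stopwatch approach above; it returns the elapsed milliseconds so callers can sum them:
private static long RunWithProfiling<T>(Action<T> method, T argument)
{
    Stopwatch stopwatch = Stopwatch.StartNew();
    method(argument);
    stopwatch.Stop();
    Console.WriteLine($"{method.Method.Name} - took {stopwatch.Elapsed.TotalSeconds} second(s).");
    return stopwatch.ElapsedMilliseconds;
}

// Usage, matching the call shown in the question:
public static void Main()
{
    RunWithProfiling(DoWork, new TaskInformation("abc"));
}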

Is it possible to run same specflow test again based on outcome?

I'm using SpecFlow, Visual Studio 2015 and NUnit. If a test fails, I need to run it once again. I have
[AfterScenario]
public void AfterScenario1()
{
if (Test Failed and the counter is 1)
{
StartTheLastTestOnceAgain();
}
}
How do I start the last test again?
In NUnit there is the RetryAttribute (https://github.com/nunit/docs/wiki/Retry-Attribute) for that. It looks like the SpecFlow.Retry plugin (https://www.nuget.org/packages/SpecFlow.Retry/) uses it. This is a third-party plugin and I have not used it yet, so there is no guarantee that it works the way you want.
As an alternative you could use the SpecFlow+ Runner (http://www.specflow.org/plus/). This specialized runner has the option to rerun your failed tests (http://www.specflow.org/plus/documentation/SpecFlowPlus-Runner-Profiles/#Execution - see the retryFor/retryCount config values).
Full disclosure: I am one of the developers of SpecFlow and SpecFlow+.
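For reference, a minimal sketch of the plain NUnit 3 RetryAttribute (outside SpecFlow-generated code); it reruns a test up to the given total number of times when it fails on an assertion:
using NUnit.Framework;

[TestFixture]
public class FlakyTests
{
    [Test, Retry(2)] // run the test up to 2 times in total before reporting a failure
    public void SometimesFailingTest()
    {
        // ... test body with its asserts
    }
}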
You could always just capture the failure during the assert step and then retry whatever it is that you're testing for. Something like:
[Given(@"I'm on the homepage")]
public void GivenImOnTheHomepage()
{
go to homepage...
}
[When(@"I click some button")]
public void WhenIClickSomeButton()
{
click button...
}
[Then(@"Something Special Happens")]
public void ThenSomethingSpecialHappens()
{
var theRightThingHappened = someWayToTellTheRightThingHappened();
if (!theRightThingHappened)
{
// try some steps again here, then re-check
theRightThingHappened = someWayToTellTheRightThingHappened();
}
Assert.IsTrue(theRightThingHappened);
}
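If you end up doing this in several Then steps, a small helper inside the step-definition class (purely illustrative, not part of the answer above) keeps the retry logic in one place:
// Hypothetical helper: re-evaluates a condition a few times before asserting.
private static void AssertEventuallyTrue(Func<bool> condition, int maxAttempts, Action retryAction)
{
    bool succeeded = condition();
    for (int attempt = 1; attempt < maxAttempts && !succeeded; attempt++)
    {
        retryAction();           // e.g. repeat the steps that should produce the result
        succeeded = condition(); // re-check
    }
    Assert.IsTrue(succeeded);
}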

How to find places in code that must be covered with unit tests

I know it's not great to write tests after you have actually written the code. I'm a unit-testing newbie and feel that unit testing may deliver many advantages, so I am obsessed with the idea of covering as much as possible.
For instance, let we have this code:
public class ProjectsPresenter : IProjectsViewObserver
{
private readonly IProjectsView _view;
private readonly IProjectsRepository _repository;
public ProjectsPresenter(IProjectsRepository repository, IProjectsView view)
{
_view = view;
_repository = repository;
Start();
}
public void Start()
{
_view.projects = _repository.FetchAll();
_view.AttachPresenter(this);
}
}
So, looking at the code above, could you tell me what tests I should typically write for that piece of code?
I'm leaning towards writing tests on the constructor to make sure that the repository's FetchAll is called and that, on the view side, AttachPresenter is called.
POST EDIT
Here is a my view interface:
public interface IProjectsView
{
List<Project> projects { set; }
Project project { set; }
void AttachPresenter(IProjectsViewObserver projectsPresenter);
}
Here is a view:
public partial class ProjectsForm : DockContent, IProjectsView
{
private IProjectsViewObserver _presenter;
public ProjectsForm()
{
InitializeComponent();
}
public Project project
{
set
{
listBoxProjects.SelectedItem = value;
}
}
public List<Project> projects
{
set
{
listBoxProjects.Items.Clear();
if ((value != null) && (value.Count() > 0))
listBoxProjects.Items.AddRange(value.ToArray());
}
}
public void AttachPresenter(IProjectsViewObserver projectsPresenter)
{
if (projectsPresenter == null)
throw new ArgumentNullException("projectsPresenter");
_presenter = projectsPresenter;
}
private void listBoxProjects_SelectedValueChanged(object sender, EventArgs e)
{
if (_presenter != null)
_presenter.SelectedProjectChanged((Project)listBoxProjects.SelectedItem);
}
}
POST EDIT #2
This is how I test the interaction with the repository. Is everything all right?
[Test]
public void ProjectsPresenter_RegularProjectsProcessing_ViewProjectsAreSetCorrectly()
{
// Arrange
MockRepository mocks = new MockRepository();
var view = mocks.StrictMock<IProjectsView>();
var repository = mocks.StrictMock<IProjectsRepository>();
List<Project> projList = new List<Project> {
new Project { ID = 1, Name = "test1", CreateTimestamp = DateTime.Now },
new Project { ID = 2, Name = "test2", CreateTimestamp = DateTime.Now }
};
Expect.Call(repository.FetchAll()).Return(projList);
Expect.Call(view.projects = projList);
Expect.Call(delegate { view.AttachPresenter(null); }).IgnoreArguments();
mocks.ReplayAll();
// Act
ProjectsPresenter presenter = new ProjectsPresenter(repository, view);
// Assert
mocks.VerifyAll();
}
I know it's not great to write tests after you have actually written the code
It's better than not writing tests at all.
Your method works with two external components, and that interaction is what should be verified (in addition to the argument validation already mentioned). Checking whether FetchAll was called gives you no value on its own (and checking what it returns belongs to the ProjectsRepository tests themselves) - you want to check that the view's projects are set, which will indirectly check whether FetchAll was called. The tests you need are:
verify that view projects are set to expected value
verify that presenter is attached
validate input arguments (see the sketch after the two examples below)
Edit: example of how you would test the first case (projects are set):
// "RegularProcessing" in test name feels a bit forced;
// in such cases, you can simply skip 'conditions' part of test name
public void ProjectsPresenter_SetsViewProjectsCorrectly()
{
var view = MockRepository.GenerateMock<IProjectsView>();
var repository = MockRepository.GenerateMock<IProjectsRepository>();
// Don't even need content;
// reference comparison will be enough
List<Project> projects = new List<Project>();
// We use repository in stub mode;
// it will simply provide data and that's all
repository.Stub(r => r.FetchAll()).Return(projects);
view.Expect(v => v.projects = projects);
ProjectsPresenter presenter = new ProjectsPresenter(repository, view);
view.VerifyAllExpectations();
}
In the second case, you'll set an expectation on the view that its AttachPresenter is called with a valid object:
[Test]
public void ProjectsPresenter_AttachesPresenterToView()
{
// Arrange
var view = MockRepository.GenerateMock<IProjectsView>();
view.Expect(v => v.AttachPresenter(Arg<IProjectsViewObserver>.Is.Anything));
var repository = MockRepository.GenerateMock<IProjectsRepository>();
// Act
var presenter = new ProjectsPresenter(repository, view);
// Assert
view.VerifyAllExpectations();
}
I would add simple tests at the start, like:
null reference checks
FetchAll() returns any value
Do not add a lot of test code at first, but refine the tests afterwards as your dev code evolves.
I would add tests for exceptions (e.g. ArgumentException), corner cases and the usual behaviour of FetchAll().
PS. Does Start have to be public?
Pex is an interesting tool that's worth checking out. It can generate suites of unit tests with high code coverage: http://research.microsoft.com/en-us/projects/pex/. It doesn't replace your own knowledge of your code - and which test scenarios are more important to you than others - but it's a nice addition to that.
The purpose of writing tests before writing production code is first and foremost to put you (the developer) in the mind-set of "how will I know when my code works?" When your development is focused on what the result of working code will look like, and not on the code itself, you are focused on the actual business value delivered by your code rather than on extraneous concerns (millions of man-hours have been spent building and maintaining features users never asked for, wanted, or needed). When you do this you are doing "test-driven development".
If you are doing pure TDD, the answer is 100% code coverage. That is, you do not write a single line of production code that is not already covered by a line of unit test code.
In Visual Studio, if you go to Test -> Analyze Code Coverage, it will show you all of the lines of code that you do not have covered.
Practically speaking, it's not always feasible to enforce 100% code coverage. Also, some lines of code are much more important than others. Determining which, once again, depends on the business value provided by each line and the consequence of that line failing. Some lines (like logging) may have a smaller consequence than others.

How to rollback in EF4 for Unit Test TearDown?

In my research about rolling back transactions in EF4, it seems everybody refers to this blog post or offers a similar explanation. In my case, I want to do this in a unit testing scenario, rolling back practically everything I do within my unit testing context to keep from updating the data in the database (yeah, we'll increment counters, but that's okay). To do this, is it best to follow the plan below? Am I missing some concept or anything else major with this (aside from the fact that my SetupMyTest and PerformMyTest functions won't really exist that way)?
[TestMethod]
public void Foo()
{
using (var ts = new TransactionScope())
{
// Arrange
SetupMyTest(context);
// Act
PerformMyTest(context);
var numberOfChanges = context.SaveChanges(SaveOptions.AcceptAllChangesAfterSave);
// if there's an issue, chances are that an exception has been thrown by now.
// Assert
Assert.IsTrue(numberOfChanges > 0, "Failed to _____");
// transaction will rollback because we do not ever call Complete on it
}
}
We use TransactionScope for this.
private TransactionScope transScope;
// Use TestInitialize to run code before running each test
[TestInitialize()]
public void MyTestInitialize()
{
transScope = new TransactionScope();
}
// Use TestCleanup to run code after each test has run
[TestCleanup()]
public void MyTestCleanup()
{
transScope.Dispose();
}
This will rollback any changes made in any of the tests.
