How to build and run a Unity project when running unit tests? - c#

I'm making a multiplayer game, and to test it I need to load a specific scene in both the unit test runner and a standalone build simultaneously.
Using constants, one client would act as a sender while the other acts as a receiver, and I would test the received data against predefined data.
I tried the following test script but got errors.
private const string SCENE_PATH = "Assets/Tests/PlayMode/Assets/Scenes/Scene.unity";
private const string BUILD_PATH = "Build/Testing.exe";

[OneTimeSetUp]
public void OneTimeSetUp()
{
    Debug.Log("OneTimeSetUp Base");

    // Build a standalone Windows player containing the test scene.
    string[] scenes = { SCENE_PATH };
    BuildPipeline.BuildPlayer(scenes, BUILD_PATH, BuildTarget.StandaloneWindows64, BuildOptions.None);
}

Just use Build And Run to launch the standalone player, and then run the tests.
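If you want the same effect from the script in the question, a minimal sketch (my assumption, not part of the original answer) is to pass BuildOptions.AutoRunPlayer, which makes the editor launch the player right after building; SCENE_PATH and BUILD_PATH are the constants from the question:
[OneTimeSetUp]
public void OneTimeSetUp()
{
    // Build the test scene and immediately launch the resulting player,
    // the scripted equivalent of the editor's "Build And Run".
    string[] scenes = { SCENE_PATH };
    BuildPipeline.BuildPlayer(scenes, BUILD_PATH, BuildTarget.StandaloneWindows64, BuildOptions.AutoRunPlayer);
}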

Related

Is it possible to build your TestCaseSource list inside SetUp using NUnit?

Can I build my TestCaseData list in my SetUp? Because with this setup my test is just being skipped. Other regular tests are running just fine.
[TestFixture]
public class DirectReader
{
    private XDocument document;
    private DirectUblReader directReader;
    private static UblReaderResult result;
    private static List<TestCaseData> rootElementsTypesData = new List<TestCaseData>();

    [SetUp]
    public void Setup()
    {
        var fileStream = ResourceReader.GetScenario("RequiredElements_2_1.xml");
        document = XDocument.Load(fileStream);
        directReader = new DirectUblReader();
        result = directReader.Read(document);

        // Is this allowed?
        rootElementsTypesData.Add(new TestCaseData(result.Invoice.Id, new IdentifierType()));
        rootElementsTypesData.Add(new TestCaseData(result.Invoice.IssueDate, new IdentifierType()));
    }

    [Test, TestCaseSource(nameof(rootElementsTypesData))]
    public void Expects_TypeOfObject_ToBeTheSameAs_InputValue(object inputValue, object expectedTypeObject)
    {
        Assert.That(inputValue, Is.TypeOf(expectedTypeObject.GetType()));
    }
}
As stated by @IMil, the answer is no... that's not possible.
TestCaseSource is used by NUnit to build a list of the tests to be run. It associates a method with a particular set of arguments. NUnit then creates an internal representation of all your tests.
OTOH, SetUp (and even OneTimeSetUp) is used when those tests are being run. By that time, the number of tests and the actual arguments to each of them are fixed; nothing can change them.
So, in order to do what you seem to want to do, your TestCaseSource has to stand on its own, fully identifying the arguments to be used for the test. That's why NUnit gives you the capability of making the source a method or property, rather than just a simple list.
In your case, I suggest something like...
private static IEnumerable<TestCaseData> RootElementsTypesData()
{
    var fileStream = ResourceReader.GetScenario("RequiredElements_2_1.xml");
    var document = XDocument.Load(fileStream);
    var directReader = new DirectUblReader();
    var result = directReader.Read(document);

    yield return new TestCaseData(result.Invoice.Id, new IdentifierType());
    yield return new TestCaseData(result.Invoice.IssueDate, new IdentifierType());
}
Obviously, this is only "forum code" and you'll have to work with it to get something that actually compiles and works for your case.
No, this is impossible.
Methods decorated with [SetUp] are run before each test case.
This means NUnit will first build the list of test cases, and only then run Setup() before each of them.
Therefore, your Setup() never gets called in time, and the list of test cases remains empty.

C# Unit test not waiting for app to close before proceeding to next unit test class

Using MSTest I have six unit test classes. I need an app (SolidWorks) to start from scratch for each test. No problem, I have the following in my unit test class to start and stop SolidWorks before the class starts and after the class finishes:
[ClassInitialize()]
public static void MyClassInitialize(TestContext testContext)
{
    var progId = "SldWorks.Application";
    var progType = System.Type.GetTypeFromProgID(progId);
    app = System.Activator.CreateInstance(progType) as ISldWorks;
    app.Visible = true;

    DocumentSpecification documentSpecification;
    documentSpecification = (DocumentSpecification)app.GetOpenDocSpec(fullFilePath);
    documentSpecification.DocumentType = (int)swDocumentTypes_e.swDocPART;

    var result = app.OpenDoc7(documentSpecification);
    if (result == null)
    {
        throw new Exception("Couldn't load SolidWorks test file.");
    }
}
[ClassCleanup()]
public static void MyClassCleanup()
{
    if (app != null)
    {
        app.CloseAllDocuments(true);
        app.ExitApp();
    }
}
My problem is that when UnitTestClass1 finishes, UnitTestClass2 starts before SolidWorks has finished closing. UnitTestClass2 seems to be grabbing the SolidWorks process that was started in UnitTestClass1. I tried adding Thread.Sleep:
[ClassCleanup()]
public static void MyClassCleanup()
{
    if (app != null)
    {
        app.CloseAllDocuments(true);
        app.ExitApp();
        Thread.Sleep(5000);
    }
}
That didn't seem to work.
I tried a while loop to wait for SolidWorks to close, and that seems to hang.
If I run each unit test class one by one they all run fine and pass. Run as a group, some of them fail because they are testing against the wrong environment (the file from the last test is still open).
How to make sure the next unit test doesn't run before the app is closed from the last unit test?
The behavior I want is for SolidWorks to run at the beginning of the class, run all the tests in the class and then for SolidWorks to close at the end of the class. I want this to happen for each class.
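One approach worth trying (this is my own sketch, not from the thread, and it assumes the SolidWorks process is named SLDWORKS) is to release the COM reference after ExitApp() and then poll until the process has actually exited before the class cleanup returns:
[ClassCleanup()]
public static void MyClassCleanup()
{
    if (app != null)
    {
        app.CloseAllDocuments(true);
        app.ExitApp();

        // Release the COM reference so the process is free to shut down.
        System.Runtime.InteropServices.Marshal.ReleaseComObject(app);
        app = null;

        // Poll (with a timeout) until the SolidWorks process is gone.
        // "SLDWORKS" is an assumption about the process name.
        var deadline = DateTime.Now.AddSeconds(60);
        while (DateTime.Now < deadline &&
               System.Diagnostics.Process.GetProcessesByName("SLDWORKS").Length > 0)
        {
            Thread.Sleep(500);
        }
    }
}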

Is it possible to run same specflow test again based on outcome?

I'm using SpecFlow, Visual Studio 2015 and NUnit. If a test fails, I need to run it once again. I have
[AfterScenario]
public void AfterScenario1()
{
    if (Test Failed and the counter is 1)
    {
        StartTheLastTestOnceAgain();
    }
}
How do I start the last test again?
In NUnit there is the RetryAttribute (https://github.com/nunit/docs/wiki/Retry-Attribute) for that. It looks like the SpecFlow.Retry plugin uses it (https://www.nuget.org/packages/SpecFlow.Retry/). This is a 3rd-party plugin and I have not used it yet, so there is no guarantee that it works the way you want.
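For a plain NUnit test method, the attribute looks like this (a hedged illustration on my part; DoSomething is a placeholder):
[Test]
[Retry(2)] // NUnit re-runs the test if the assertion fails, up to the given number of attempts
public void FlakyTest()
{
    Assert.That(DoSomething(), Is.True);
}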
As an alternative you could use the SpecFlow+Runner (http://www.specflow.org/plus/). This specialized runner has the option to rerun your failed tests (http://www.specflow.org/plus/documentation/SpecFlowPlus-Runner-Profiles/#Execution - the retryFor/retryCount config values).
Full disclosure: I am one of the developers of SpecFlow and SpecFlow+.
You could always just capture the failure during the assert step and then retry whatever it is that you're testing for. Something like:
[Given(@"I'm on the homepage")]
public void GivenImOnTheHomepage()
{
    // go to homepage...
}

[When(@"I click some button")]
public void WhenIClickSomeButton()
{
    // click button...
}

[Then(@"Something Special Happens")]
public void ThenSomethingSpecialHappens()
{
    var theRightThingHappened = SomeWayToTellTheRightThingHappened();
    if (!theRightThingHappened)
    {
        // then try some steps again here and re-check the result
        theRightThingHappened = SomeWayToTellTheRightThingHappened();
    }
    Assert.IsTrue(theRightThingHappened);
}

What are good ways to unit test interaction with the filesystem?

I'm working on a simple project more as an exercise in TDD than anything else. The program fetches some images from a web server and saves them as files. For the record, what I am doing (my desired end result) is very similar to this perl script but in C#.
I've got to the point where I need to save the files to disk, and I need unit tests to drive that code. I'm not sure how to approach this. I want to be able to verify that the code created the expected files with the expected file name(s), and of course I don't want to touch the file system at all. I'm not completely new to unit testing and TDD, but for some reason I'm really not clear what to do in this situation. I'm sure the answer will be obvious once I've seen it, but... the mysterious place in my brain where code comes from is just not cooperating.
My tools of choice are MSpec and FakeItEasy, but suggestions in any frameworks would be gratefully received. What are sensible approaches to unit testing file system interactions?
What would help here is Dependency Injection. Break up the monolithic download operation into smaller pieces and inject them into the downloader. Declare interfaces for these pieces:
public interface IImageFetcher
{
    IEnumerable<Image> FetchImages(string address);
}

public interface IImagePersistor
{
    void StoreImage(Image image, string path);
}
With these declarations you can write a downloader class that integrates the whole thing like this:
public class ImageDownloader
{
    private IImageFetcher _imageFetcher;
    private IImagePersistor _imagePersistor;

    // Constructor injection of components
    public ImageDownloader(IImageFetcher imageFetcher, IImagePersistor imagePersistor)
    {
        _imageFetcher = imageFetcher;
        _imagePersistor = imagePersistor;
    }

    public void Download(string source, string destination)
    {
        var images = _imageFetcher.FetchImages(source);
        int i = 1;
        foreach (Image img in images)
        {
            string path = Path.Combine(destination, "Image" + i.ToString("000"));
            _imagePersistor.StoreImage(img, path);
            i++;
        }
    }
}
Note that ImageDownloader does not know which implementations will be used or how they work.
Now, when testing, you can supply a dummy persistor that stores the filenames in a List<string>, for instance, instead of the real one that writes to the file system.
UPDATE
// For testing purposes only.
class DummyImagePersistor : IImagePersistor
{
    public readonly List<string> Filenames = new List<string>();

    public void StoreImage(Image image, string path)
    {
        Filenames.Add(path);
    }
}
Testing:
var persistor = new DummyImagePersistor();
var sut = new ImageDownloader(new ImageFetcher(), persistor);
sut.Download("http://myimages.com/images", @"C:\Destination");
Assert.AreEqual(10, persistor.Filenames.Count);
...
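Since the question mentions FakeItEasy, the same test could also use a generated fake instead of the hand-written dummy. A rough sketch (my assumption, not part of the original answer; it uses FakeItEasy's A class):
var persistor = A.Fake<IImagePersistor>();
var sut = new ImageDownloader(new ImageFetcher(), persistor);
sut.Download("http://myimages.com/images", @"C:\Destination");

// Verify that the downloader handed every fetched image to the persistor.
A.CallTo(() => persistor.StoreImage(A<Image>._, A<string>._)).MustHaveHappened();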

VS 2010 Load Tests Results with custom counters

I am new to load testing (and testing in general) with Visual Studio 2010, and I am dealing with several problems.
My question is: is there any possible way to add a custom test variable to the Load Test Results?
I have the following UnitTest:
[TestMethod]
public void Test()
{
    Stopwatch testTimer = new Stopwatch();
    testTimer.Start();
    httpClient.SendRequest();
    testTimer.Stop();
    double requestDelay = testTimer.Elapsed.TotalSeconds;
}
This UnitTest is used by many load tests, and I want to add the requestDelay variable to the Load Test Results so I can get Min, Max and Avg values like all the other load test counters (e.g. Test Response Time).
Is that possible?
Using the link from @Pritam Karmakar's comment and the walkthroughs at the end of my post, I finally managed to find a solution.
First I created a load test plug-in and used the LoadTestStarting event to create my custom counter category and add all my counters to it:
void m_loadTest_LoadTestStarting(object sender, System.EventArgs e)
{
    // Delete the category if it already exists
    if (PerformanceCounterCategory.Exists("CustomCounterSet"))
    {
        PerformanceCounterCategory.Delete("CustomCounterSet");
    }

    // Create the counters collection and add my custom counters
    CounterCreationDataCollection counters = new CounterCreationDataCollection();
    counters.Add(new CounterCreationData(Counters.RequestDelayTime.ToString(), "Keeps the actual request delay time", PerformanceCounterType.AverageCount64));
    // .... Add the rest of the counters

    // Create the custom counter category
    PerformanceCounterCategory.Create("CustomCounterSet", "Custom Performance Counters", PerformanceCounterCategoryType.MultiInstance, counters);
}
Then, in the Load Test editor I right-clicked the Agent CounterSet and selected Add Counters... In the Pick Performance Counters window I chose my performance category and added my counters to the CounterSet so the load test would gather their data.
Finally, every UnitTest creates instances of the Counters in the ClassInitialize method and then it updates the counters at the proper step:
[TestClass]
public class UnitTest1
{
    private static PerformanceCounter RequestDelayTime;

    [ClassInitialize]
    public static void ClassInitialize(TestContext testContext)
    {
        // Create the instances of the counters for the current test
        RequestDelayTime = new PerformanceCounter("CustomCounterSet", "RequestDelayTime", "UnitTest1", false);
        // .... Create the rest of the counter instances
    }

    [TestCleanup]
    public void CleanUp()
    {
        RequestDelayTime.RawValue = 0;
        RequestDelayTime.EndInit();
        RequestDelayTime.RemoveInstance();
        RequestDelayTime.Dispose();
    }

    [TestMethod]
    public void TestMethod1()
    {
        // ... Testing
        // Update the counters
        RequestDelayTime.IncrementBy(time);
        // ... Continue testing
    }
}
Links:
Creating Performance Counters Programmatically
Setting Performance Counters
Including unit test variable values in load test results
I think what you actually need is to use:
[TestMethod]
public void Test()
{
    TestContext.BeginTimer("mytimer");
    httpClient.SendRequest();
    TestContext.EndTimer("mytimer");
}
You can find good documentation here.
Interesting question. I've never tried this, but I have an idea.
Create three class-level properties for MAX, MIN and AVG, and manipulate those values during each test. Then write all the final values once the entire load test has executed, via the ClassCleanup or AssemblyCleanup test attribute. You will have to run the load test for 1-2 minutes and see which attribute method gets called at the end. You can then print those final values to a flat file on a local drive via a TextWriter.
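A rough sketch of that idea (my own illustration; the httpClient.SendRequest call is carried over from the question and the output path is a placeholder):
[TestClass]
public class RequestDelayTests
{
    private static double min = double.MaxValue;
    private static double max = double.MinValue;
    private static double total;
    private static int count;

    [TestMethod]
    public void Test()
    {
        Stopwatch testTimer = Stopwatch.StartNew();
        httpClient.SendRequest();   // same call as in the question
        testTimer.Stop();

        double requestDelay = testTimer.Elapsed.TotalSeconds;
        min = Math.Min(min, requestDelay);
        max = Math.Max(max, requestDelay);
        total += requestDelay;
        count++;
    }

    [ClassCleanup]
    public static void WriteStats()
    {
        // Write the aggregated values once the load test has finished.
        using (TextWriter writer = new StreamWriter(@"C:\Temp\RequestDelayStats.txt", true))
        {
            writer.WriteLine("Min={0}, Max={1}, Avg={2}", min, max, count > 0 ? total / count : 0);
        }
    }
}
Note that load tests run test iterations concurrently, so in practice the shared fields would need locking or Interlocked updates.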
