I am working for a consulting company that develops a lot of Add-Ons for SAP Business One using .NET. I want to introduce TDD (or at least some good unit testing practices) to the company to increase code quality. Here's the problem.
SAP provides a COM object (called Company) that lets you interact with the data in SAP. Company is an interface, so it can be mocked, but the amount of mocking that has to be done to get a single test to run is huge! I've tried it out with a few tests, and although they work, I really had to have a good understanding of the internals of the unit I was testing in order to create tests that passed. I feel that this very much defeats the purpose of the unit tests (I'm testing the internals as opposed to the interface).
Currently, through dependency injection, I've created a mock Company object that returns mock Documents, which in turn sometimes return mock values depending on the circumstances, just to get the tests to run. Is there a better way? Has anyone been able to effectively unit test code that depends heavily on an external library, especially when the result of the test should be some change to that mocked object? (Say, when the add-on runs, the mock Company object's SaveDocument method should be called with a particular mock Document.)
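To give a flavour of what a single test currently involves, the arrange step looks roughly like the sketch below (NUnit and Moq syntax for illustration; ICompany, IDocument and ILine are stand-ins for thin wrappers around the SAP types, not the real API):

    using System.Collections.Generic;
    using Moq;
    using NUnit.Framework;

    // Illustrative wrapper interfaces standing in for the SAP COM types (names assumed).
    public interface ILine { string ItemCode { get; } decimal Quantity { get; } }
    public interface IDocument { IEnumerable<ILine> Lines { get; } }
    public interface ICompany
    {
        IDocument GetDocument(int docEntry);
        void SaveDocument(IDocument document);
    }

    [TestFixture]
    public class AddOnTests
    {
        [Test]
        public void RunningTheAddOn_SavesTheDocumentItLoaded()
        {
            // Arrange: every object the unit under test touches has to be mocked, several levels deep.
            var line = new Mock<ILine>();
            line.Setup(l => l.ItemCode).Returns("A1001");
            line.Setup(l => l.Quantity).Returns(5m);

            var document = new Mock<IDocument>();
            document.Setup(d => d.Lines).Returns(new[] { line.Object });

            var company = new Mock<ICompany>();
            company.Setup(c => c.GetDocument(42)).Returns(document.Object);

            // Act (the add-on class itself is omitted from this sketch):
            //     new MyAddOn(company.Object).Run();

            // Assert: the mocked Company should have been asked to save the same document:
            //     company.Verify(c => c.SaveDocument(document.Object), Times.Once());
        }
    }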
I know this may be a strange question, but the fact of the matter is that in order to get these unit tests to run well, I feel like my only option is to create a really, really large mock that handles multiple mock Documents, knows which document to hand out at the right time, and a lot of other things. It would essentially be mocking out all of SAP. I don't know whether there's some other best practice for cases like this.
Thanks in advance!
EDIT (in response to Carl Manaster's answer):
You're probably right. I think the problem is that most of the existing code base is very procedural - a lot of Windows services with a Run() method. I can definitely see how, if the projects were structured a bit better, tests could be written with a lot more ease.
But let's say that the company can't invest in refactoring all of these existing projects. Should I just abandon the idea of unit testing these things?
If your methods are short enough, you should be able to mock only the interactions with one entity (Company), without interacting with the entities it returns. What you need is for your method to call (let's say) company.getDocument(). If your method under test has further interactions with the returned document at that point, split out that code, so that you can test that code's interactions with a (mocked) Document, without worrying about the Company in that test. It sounds as though your methods are currently much too long and involved for this kind of approach, but if you whittle away at them to the point where testing one method simply verifies that company.getDocument was called, you will find it much easier to test, much easier to work with, and ultimately much easier to understand.
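A rough sketch of that split, reusing the illustrative ICompany/IDocument wrappers from the sketch in the question (the class and method names here are made up):

    public class DocumentUpdater
    {
        private readonly ICompany company;

        public DocumentUpdater(ICompany company) { this.company = company; }

        // Thin orchestration: fetch, delegate, save. Testing this method only requires
        // mocking ICompany and verifying that GetDocument and SaveDocument were called.
        public void Run(int docEntry)
        {
            var document = company.GetDocument(docEntry);
            Apply(document);
            company.SaveDocument(document);
        }

        // The document-level logic is split out into its own unit, so it can be tested
        // against a mocked IDocument alone, without any Company in the test.
        public virtual void Apply(IDocument document)
        {
            // ... whatever the add-on actually does to the document
        }
    }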
Update
To your question of whether you should abandon the idea of unit testing: Do you want it to work? Do you have changes to make to the code? If the answers are (as I would assume) affirmative, then you should probably persevere. You don't have to test everything - but test what you're changing. You don't have to refactor everything - but refactor what you're working on so it's easier to test and easier to change. That "whittling away" that I mentioned: do that in service of solving the problems you have at the moment with your code base; over time you will find the code that most needed the tests has been tested - and it's a lot easier to work with because it's well tested and better factored.
I came across a usage of NSubstitute inside business logic (outside of test classes):
var extension = Substitute.For<IExtension>();
I am used to using NSubstitute inside test classes, when you need to mock some class (interface). But seeing NSubstitute outside of test classes confused me. Is that a correct place for it? Is it correct to use NSubstitute like a dependency injection container that can create an instance of an interface/class?
My concern is that NSubstitute was designed to be used in tests. Performance inside tests is not very important, so the library could afford to be slow. It also relies on reflection, so it may not be very quick. But is the performance of NSubstitute actually poor, or is it OK?
Are there any other reasons why NSubstitute or other mocking libraries should not be used outside of tests?
No, it is not generally good practice to use a mocking library in production code. (I'm going to use "generally" a lot here as I think any question on "good practice" will require a degree of generalisation. People may be able to come up with cases that work against this generalisation, but I think those cases will be the vast minority.)
Even without performance considerations, mocking libraries create test implementations for interfaces/classes. Test implementations generally support functions such as recording calls made and stubbing specific calls to return specific results. Generally when we have an interface or class for production code, it is to achieve some specific purpose, not the general purpose of recording calls and stubbing return values.
While it would be possible to provide a specific implementation for an interface using NSubstitute and to stub each call to execute production code logic, why not just create a class with the required implementations (see the sketch after the list of advantages below)?
This will generally have these advantages:
should be more succinct to implement (if not, consider switching to a better language! :D)
uses the native constructs of your programming language
should have better performance (removes levels of indirection required for mocking library)
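For example, instead of configuring a substitute to behave like a production component, a plain class does the same job in ordinary C# (IExtension is the interface from the question; the member and implementation are invented for illustration):

    // The interface from the question, with a member invented for illustration.
    public interface IExtension
    {
        string Describe();
    }

    // Production approach: an ordinary class implementing the interface in plain C#.
    public class DefaultExtension : IExtension
    {
        public string Describe() => "default extension";
    }

    // The NSubstitute equivalent, which only makes sense inside a test:
    //     var extension = Substitute.For<IExtension>();
    //     extension.Describe().Returns("default extension");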
For NSubstitute specifically there are some big reasons why you should never use it in production code. Because the library is designed for test code it uses some approaches that are unacceptable for production code:
It uses global state to support its syntax.
It abuses C#/VB syntax for the purpose of testing (can almost be considered a testing DSL). e.g. say sub.MyCall() returns an int. Stubbing a call like sub.MyCall().Returns(42) means we are calling int.Returns(42), which is now somehow going to influence a return value of a call outside of the int on which it is being called. This is quite different to how C#/VB generally works.
It requires virtual members for everything. This constraint is shared by many mocking libraries. For NSubstitute, you can get unpredictable results if you use it with non-virtual members. Unpredictability is not a nice thing to have for production code.
Tests are generally short-lived. NSubstitute (and probably other libraries) can make implementation decisions that rely on short-lived objects.
Tests show pass or failure for a particular case. This means if there is a problem in NSubstitute it can be immediately picked up while attempting to write and run a specific test. While a lot of effort goes into NSubstitute quality and reliability, the amount of work and scrutiny the C# compiler goes through is a completely different level. For production code, we want a very stable base to use as a foundation.
In summary: your programming language provides constructs designed and optimised for implementing logical interfaces. Your mocking library provides constructs designed and optimised for the much more limited task of providing test implementations of logical interfaces for use with testing code in isolation from its dependencies. Unless you have an ironclad reason as to why you would do the programming equivalent of digging a hole with a piece of paper instead of a shovel, I'd suggest using each tool for its intended purpose. :)
I am looking to run NUnit tests, but I do not want the tests to be data dependent.
For example: if I am running my unit tests on a testing server referring to a testing database and some user changes values in that database, it should not have an impact on my testing scenarios.
However, I do want my testing scenarios to refer to Oracle stored procedures.
Thanks... any help would be highly appreciated.
I am also open to the idea of any other tool which can achieve this.
If you are really hitting the database, this is not a unit test but an integration test.
Basically you have two options, each with its own pros and cons:
The first option is to keep the idea of integration tests but ensure somehow that the data you are using is what you expect. This can be achieved with a stored procedure in your testing database that recreates your data when it is called; you call this procedure in your test initialization and then do all of your testing. The main disadvantage here is that the tests will take more time than unit tests and will cost more resources.
The main advantage is that you can be sure your code integrates well with your database.
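A minimal sketch of that initialization step, assuming NUnit and the managed ODP.NET driver; RESET_TEST_DATA is a hypothetical stored procedure that restores the known rows, and the connection string is a placeholder:

    using System.Data;
    using NUnit.Framework;
    using Oracle.ManagedDataAccess.Client;

    [TestFixture]
    public class OrderProcedureIntegrationTests
    {
        // Connection string for the dedicated testing database (placeholder).
        private const string ConnectionString = "...";

        [SetUp]
        public void RecreateTestData()
        {
            // Call the hypothetical RESET_TEST_DATA procedure so every test starts
            // from the same known rows, no matter what other users changed.
            using (var connection = new OracleConnection(ConnectionString))
            using (var command = new OracleCommand("RESET_TEST_DATA", connection))
            {
                command.CommandType = CommandType.StoredProcedure;
                connection.Open();
                command.ExecuteNonQuery();
            }
        }

        [Test]
        public void GetOrderTotal_ReturnsTheExpectedValue()
        {
            // ... call the production code that uses the Oracle stored procedures
            // and assert against the freshly reset data.
        }
    }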
The second option is to write real unit tests. Here you do not use the database at all, but instead create in-memory objects that represent the data from your database.
Because you create these objects in the arrange part of your unit test, you know exactly what data they hold.
The main disadvantage here is that you can't be sure your code integrates well with your database.
The main advantage is that the tests take less time than integration tests and cost fewer resources; moreover, your tests can run even if your testing database is down.
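A minimal sketch of this second option; the repository interface and the calculator are invented for illustration, standing in for whatever code currently calls the Oracle stored procedures, so the test never touches the database:

    using NUnit.Framework;

    // Hypothetical abstraction over the stored-procedure calls.
    public interface IOrderRepository
    {
        decimal GetOrderTotal(int orderId);
    }

    // The production code under test depends only on the abstraction.
    public class DiscountCalculator
    {
        private readonly IOrderRepository orders;

        public DiscountCalculator(IOrderRepository orders) { this.orders = orders; }

        public decimal DiscountFor(int orderId) =>
            orders.GetOrderTotal(orderId) > 1000m ? 0.1m : 0m;
    }

    [TestFixture]
    public class DiscountCalculatorTests
    {
        // In-memory implementation arranged entirely inside the test.
        private class InMemoryOrderRepository : IOrderRepository
        {
            public decimal Total;
            public decimal GetOrderTotal(int orderId) => Total;
        }

        [Test]
        public void LargeOrders_GetTenPercentDiscount()
        {
            var repository = new InMemoryOrderRepository { Total = 1500m };
            var calculator = new DiscountCalculator(repository);

            Assert.AreEqual(0.1m, calculator.DiscountFor(42));
        }
    }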
If you want, you can actually use both options; this is useful because each kind of test examines your code from a different perspective.
More about unit tests vs. integration tests can be found here.
I have a test suite which works against a dedicated copy of the real database. The application creates a complex object by filling it from the database, and it would be a lot of work to create one "manually" or to mock one into a valid state. So I ran a database query from the tests in order to have a valid object (not to verify that I integrate with the database correctly). It was blazing fast; especially after the first call, MSSQL cached it and the query ran in less than 1 ms.
Are there any arguments for why I should avoid doing this? If the objection is speed, the query runs fast when the database is on the same network. It seems that most literature out there recommends against this - but why?
EDIT - To answer my own question: "unit tests" means that each test is autonomous; if you touch the database, one test could modify it and affect another test. Even though transactions can solve this, it's still not quite in the "spirit" of unit tests and makes them a bit cumbersome. So this should be avoided, but not under all circumstances: if I have no choice, I'll access the database inside a transaction, which makes sure it won't affect other tests.
This seems to be a principle some people follow - that you should never hit the database - but my experience is that sometimes, trying too hard to avoid the database creates giant tests, over-use of mocks, or a strange and brittle data access interface. Search for "test-induced design damage" for more on this idea.
For my part, I'm happy to access a database as part of tests. You can often even do tests that write, if you can wrap the whole test in a transaction.
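A minimal sketch of that transaction-per-test idea with NUnit and TransactionScope, assuming the data access code enlists in the ambient transaction:

    using System.Transactions;
    using NUnit.Framework;

    [TestFixture]
    public class DatabaseBackedTests
    {
        private TransactionScope scope;

        [SetUp]
        public void BeginTransaction()
        {
            // Every test runs inside an ambient transaction.
            scope = new TransactionScope();
        }

        [TearDown]
        public void RollBack()
        {
            // Disposing without calling Complete() rolls everything back,
            // so one test's writes never leak into another test.
            scope.Dispose();
        }

        [Test]
        public void SavingAnOrder_WritesARow()
        {
            // ... exercise code that writes to the dedicated test database,
            // then query it back and assert, all inside the same transaction.
        }
    }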
We split our tests into unit tests and integration tests (separate DLLs). The unit tests cannot go to the database; the integration tests can.
Keep in mind that having a lot of integration tests can seriously slow your build. I can run all my unit tests in minutes, while running the integration tests takes over an hour.
You have to define for yourself WHAT you want to test. Do you want a test that simply checks whether your API does what it should? Then you can mock even that single database entity in your tests and verify that your code behaves correctly. Do you need an integration test including database traffic, network, ...? Then test your real-world scenarios, including the code that is necessary to get and manage your entities. So it depends on what you want to test and what you expect from those tests.
Database performance should play no part in this decision, as you cannot rely on any DB-specific optimization in your tests. What happens when you decide to change the DBMS to something without that kind of optimization? Then your tests will fail, which is hardly what you want. Performance should never affect what to test, although it might play a role in how to do so.
My class has something like 30-40 properties, and I really want to unit test it.
But I have to create Moq instances (many of them, with different combinations, etc.).
Is there an easy way? This is real work!
My class can't be refactored, "trust me" (hehe, no really it can't, they are just properties of the object that are very tightly coupled).
Sounds like you need to do some major refactoring. I would start by taking a good look at the single responsibility principle, and making classes that will only have 1 reason to change. Once you break out functionality into separate classes that deal with only 1 responsibility, you can start writing tests for those classes, and they shouldn't take a page-full of mock objects.
This is the advantage of test-driven development -- you immediately run into the problems caused by huge classes, and are driven to avoid them if you want to be able to write tests.
Personally, I don't think you need to try every combination to test your class.
You mention lots about properties, but little about behavior. Shouldn't the tests be about behavior more than state?
There could well be situations where, due to the nature of the class, there are a lot of legitimate properties. I know, I've been there and done that. When examining such a class, it is important to determine that each property really does belong in the one class, and not elsewhere. The Single Responsibility Principle comes into play here.
Unfortunately, to break any tight coupling, it will take some time and effort to refactor. Just suck it up and get 'er done!
Has anyone worked at a large company, or on a very large project, that successfully used unit testing?
Our current database has ~300 tables, with ~100 aggregate roots. Overall there are ~4000 columns, and we'll have ~2 Million lines of code when complete. I was wondering - do companies with databases of this size (or much larger) actually go through the effort to Mock/Stub their domain objects for testing? It's been two years since I worked in a large company, but at the time all large applications were tested via integration tests. Unit testing was generally frowned upon if it required much set up.
I'm beginning to feel like unit testing is a waste of time for anything but static methods, as many of our test methods take just as long as or longer to write than the actual code ... in particular, the setup/arrange steps. To make things worse, one of our developers keeps quoting how unit testing and Agile methods were such an abject failure on Kent Beck's Chrysler project ... and that it's just not a methodology that scales well.
Any references or experiences would be great. Management likes the idea of Unit Testing, but if they see the amount of extra code we're writing (and our frustration) they'd be happy to back down.
I've seen TDD work very well on large projects, especially to help us get a legacy code base under control. I've also seen Agile work at a large scale, though just doing Agile practices alone isn't sufficient I think. Richard Durnall wrote a great post about how things break in a company as Agile gains ground. I suspect Lean may be a better fit at higher levels in an organisation. Certainly if the company culture isn't a good match for Agile by the time it starts being used across multiple projects, it won't work (but neither will anything else; you end up with a company that can't respond effectively to change either way).
Anyway, back to TDD... Testing stand-alone units of code can sometimes be tricky, and if there's a big data-driven domain object involved I frequently don't mock it. Instead I use a builder pattern to make it easy to set that domain object up in the right way.
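As a sketch, such a test data builder gives every property a sensible default and only spells out what a particular test cares about (Customer and its properties are invented for illustration):

    public class Customer
    {
        public string Name { get; set; }
        public string Country { get; set; }
        public decimal CreditLimit { get; set; }
        // ... dozens more properties in the real domain object
    }

    public class CustomerBuilder
    {
        private string name = "Default Customer";
        private string country = "DE";
        private decimal creditLimit = 1000m;

        public CustomerBuilder WithName(string value) { name = value; return this; }
        public CustomerBuilder WithCountry(string value) { country = value; return this; }
        public CustomerBuilder WithCreditLimit(decimal value) { creditLimit = value; return this; }

        public Customer Build() =>
            new Customer { Name = name, Country = country, CreditLimit = creditLimit };
    }

    // In a test, only the values relevant to the behaviour under test appear:
    //     var customer = new CustomerBuilder().WithCreditLimit(50000m).Build();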
If the domain object has complex behaviour, I might mock that so that it behaves predictably.
For me, the purpose of writing unit tests is not really for regression testing. It helps me think about the behaviour of the code, its responsibilities and how it uses other pieces of code to help it do what it does. It provides documentation for other developers, and helps me keep my design clean. I think of them as examples of how you can use a piece of code, why it's valuable and the kind of behaviour you can expect from it.
By thinking of them this way I tend to write tests which make the code easy and safe to change, rather than pinning it down so nobody can break it. I've found that focusing on mocking everything out, especially domain objects, can cause quite brittle tests.
The purpose of TDD is not testing. If you want to test something you can get a tester to look at it manually. The only reason that testers can't do that every time is because we keep changing the code, so the purpose of TDD is to make the code easy to change. If your TDD isn't making things easier for you, find a different way to do it.
I've had some good experiences with mock objects and unit testing in projects where there was a lot of upfront design and a comfortable timeline to work with -- unfortunately, that is often a luxury most companies can't afford to take a risk on. GTD and GTDF methodologies really don't help the problem either, as they put developers on a release treadmill.
The big problem with unit tests is that if you don't have buy-in from the whole team, what happens is that one developer looks at the code through rose-colored glasses and (through no fault of their own) implements only the happy-path tests they can think of. Unit tests don't always get kept up as well as they should, because corner cases slip by and not everyone drinks the Kool-Aid. Testing is a very different mindset from coming up with the algorithms, and many developers really just don't know how to think that way.
When iterations and development cycles are tight, I find myself gaining more confidence in code quality by relying on static analysis and complexity tools (FindBugs, PMD, Clang/LLVM, etc.). Even if they flag areas that you can't directly address, you can mark them as landmines and better judge the risk of implementing new features in those areas.
If you find that mocking/stubbing is painful and takes a long time, then you probably have a design that is not made for unit testing. And then you either refactor or live with it.
I would refactor.
I have a large application and see no trouble in writing unit tests, and when I do, I know it's time to refactor.
Of course, there is nothing wrong with integration tests. I actually have those too, to test the DAL and other parts of the application.
All the automated tests should form a whole; unit tests are just a part of them.
Yes they do. Quite extensively.
The hard part is getting the discipline in place to write clean code - and (the even harder part) the discipline to chip away at bad code by refactoring as you go.
I've worked in one of the world's biggest banks on a project that has been used from New York, London, Paris and Tokyo. It used mocks very well and through a lot of discipline we had pretty clean code.
I doubt that mocks are the problem - they're just a fairly simple tool. If you've got to rely on them super-heavily - say, it looks like you need mocks of mocks returning mocks - then something has gone wrong with the test or the code...