Should I avoid hitting the database in tests as a shortcut to fill an object with valid data? [closed] - c#

I have a test suite which works against a dedicated copy of the real database. The application creates a complex object by filling it from the database, and it would be a lot of work to create one "manually" or to mock one into a valid state. So I ran a database query from the tests in order to get a valid object (not to verify that I integrate with the database correctly). It was blazing fast; after the first call MSSQL cached the query and it ran in less than 1 ms.
Are there any arguments why I should avoid doing this? If the objection is speed: with the database on the same network, it is fast. Most literature out there recommends against this - but why?
EDIT - To answer my own question: "unit tests" means that each test is autonomous; if you touch the database, one test could modify it and affect another. Even though transactions can solve this, it's still not quite in the "spirit" of unit tests and makes them a bit cumbersome. So this should be avoided, but not under all circumstances: if I have no choice, I'll access the database inside a transaction, which makes sure it won't affect other tests.

This seems to be a principle some people follow - that you should never hit the database - but my experience is that sometimes, trying too hard to avoid the database creates giant tests, over-use of mocks, or a strange and brittle data access interface. Search for "test-induced design damage" for more on this idea.
For my part, I'm happy to access a database as part of tests. You can often even do write tests if you wrap the whole test in a transaction.
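Something like this, say, with NUnit and System.Transactions (OrderRepository and the order behaviour are invented stand-ins, not from the question):

    using System.Transactions;
    using NUnit.Framework;

    [TestFixture]
    public class OrderTests
    {
        private TransactionScope _scope;

        [SetUp]
        public void SetUp()
        {
            // Each test runs inside its own ambient transaction.
            _scope = new TransactionScope();
        }

        [TearDown]
        public void TearDown()
        {
            // Disposing without calling Complete() rolls everything back,
            // so writes made by one test can never leak into another.
            _scope.Dispose();
        }

        [Test]
        public void SavingAnOrder_RoundTripsThroughTheDatabase()
        {
            var order = OrderRepository.Load(42);   // hypothetical repository
            order.MarkShipped();
            OrderRepository.Save(order);

            Assert.That(OrderRepository.Load(42).IsShipped, Is.True);
        }
    }

Any SqlConnection opened inside the scope enlists in the ambient transaction automatically, so the repository code doesn't need to know it's running under a test.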

We split our tests into unit tests and integration tests, in separate DLLs. The unit tests cannot go to the db; the integration tests can.
Keep in mind that having a lot of integration tests can seriously slow your build. I can run all my unit tests in minutes, while running the integration tests takes over an hour.
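One lightweight way to keep that split honest with NUnit is to tag the slow fixtures and filter on the tag (the category name is just a convention):

    using NUnit.Framework;

    [TestFixture]
    [Category("Integration")]   // anything in here may touch the database
    public class CustomerRepositoryIntegrationTests
    {
        [Test]
        public void Load_ReturnsThePersistedCustomer()
        {
            // ... real database access here ...
        }
    }

The fast build can then skip them with the NUnit 3 console runner:

    nunit3-console MyTests.dll --where "cat != Integration"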

You have to define for yourself WHAT you want to test. Do you want a test that simply checks that your API does what it should? Then you can mock even that single database entity away from your tests and verify that your code behaves correctly (a sketch follows below). Need an integration test including database traffic, network, ...? Then test your real-world scenarios, including the code that is necessary to get and manage your entities. So it depends on what you want to test and what you expect from those tests.
Database performance should have no effect on this decision, as you cannot rely on any DB-specific optimization in your tests. What happens when you decide to change the DBMS to something without this kind of optimization? Then your tests will fail, which is doubtfully what you want. Performance should never affect what to test, though it might play a role in how to do so.
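For the first case, a minimal sketch with Moq (ICustomerRepository, Customer and DiscountCalculator are assumed names, not from the question):

    using Moq;
    using NUnit.Framework;

    [Test]
    public void PremiumCustomers_GetTenPercentDiscount()
    {
        // Mock only the single entity source the code under test needs.
        var repo = new Mock<ICustomerRepository>();
        repo.Setup(r => r.GetById(7))
            .Returns(new Customer { Id = 7, IsPremium = true });

        var calculator = new DiscountCalculator(repo.Object);

        Assert.That(calculator.DiscountFor(7), Is.EqualTo(0.10m));
    }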

Related

Hundreds of failing unit tests [closed]

I have inherited a very old (first commits are in 1999) code base and have found 500 of the 2000 or so unit tests to be failing. My question is, should I go through each test manually and check if it is still relevant or should I start over?
Nobody here can answer this as such, but you have to ask for each test:
Does this test still make sense? If not, remove it.
Is the test testing something that should work? Do something to fix it up.
Is the test conceptually useful, but what it tests has changed so it is now failing? Rewrite it so that it works in its new way.
How much effort will the test take to fix, versus the value it provides? If it's a lot of effort and low value, maybe remove it...
We can't really say whether you should do one thing or another.
It's probably worth just LOOKING at the tests, and especially at the effort of fixing each one, before starting any real work.
You may also need to consult with some kind of test manager for your group, and seek their input on the coverage, bug rate, common problems, etc. for that part of the code.
When I look at old tests in our code base, it's sometimes best to remove, sometimes worth "fixing up" and sometimes worth starting from scratch. Unless you are familiar with the test, it's hard to say before you spend some effort on investigating the issue...

How to do Data independent unit testing using NUnit [closed]

I am looking to run NUnit tests, but I do not want the tests to be data dependent.
For example: if I am running my unit tests on the testing server against the testing database, and some user changes the database values,
it should not have an impact on my testing scenarios.
However, I want my testing scenarios to refer to Oracle stored procedures.
Thanks....any help would be highly appreciated.
Also I am open to the idea of any other tool which has the ability to achieve this.
If you are really hitting the database, this is not a unit test but an integration test.
Basically you have two options, each with its own pros and cons:
Keep the idea of integration tests, but ensure somehow that the data you are using is as you expect. This can be achieved with a stored procedure in your testing database that recreates your data whenever it is called; you call this procedure in your test initialization and then do all of your testing (a sketch follows). The main disadvantage here is that the test will take more time than a unit test and will cost more resources.
The main advantage is that you can be sure your code integrates well with your database.
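That initialization step could look something like this (dbo.ResetTestData and TestDbConnectionString are invented names; with Oracle you would use OracleConnection/OracleCommand in the same shape):

    using System.Data;
    using System.Data.SqlClient;
    using NUnit.Framework;

    [SetUp]
    public void RecreateKnownTestData()
    {
        using (var conn = new SqlConnection(TestDbConnectionString))
        using (var cmd = new SqlCommand("dbo.ResetTestData", conn))
        {
            cmd.CommandType = CommandType.StoredProcedure;
            conn.Open();
            // Drops and re-inserts the rows every test expects to find.
            cmd.ExecuteNonQuery();
        }
    }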
Choose real unit tests: in this option you will not use the database at all, but instead create in-memory objects that represent the data from your database.
Because you create these objects in the arrange part of your unit test, you know exactly what data they hold.
The main disadvantage here is that you can't be sure your code integrates well with your database.
The main advantage is that the test will take less time than an integration test and will cost fewer resources; moreover, your tests can run even if your testing database is down.
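A sketch of that arrange-only style (Invoice and InvoicePolicy are invented for illustration):

    using System;
    using NUnit.Framework;

    [Test]
    public void OverdueInvoices_AreFlagged()
    {
        // Arrange: no database, just an object with exactly known state.
        var invoice = new Invoice
        {
            DueDate = new DateTime(2014, 1, 1),
            Paid = false
        };

        // Act + assert against in-memory data only.
        var policy = new InvoicePolicy();
        Assert.That(policy.IsOverdue(invoice, today: new DateTime(2014, 6, 1)), Is.True);
    }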
If you want, you can actually use both options; this is useful because each kind of test exercises your code from a different perspective.
More about unit tests vs integration tests can be found here.

Test Driven Development with very large Mock [closed]

I am working for a consulting company that develops a lot of Add-Ons for SAP Business One using .NET. I want to introduce TDD (or at least some good unit testing practices) to the company to increase code quality. Here's the problem.
SAP provides a COM object (called Company) that lets you interact with the data in SAP. Company is an interface, so it can be mocked, but the amount of Mocking that would have to be done to get a single test to run is huge! I've tried it out with a few tests, and although they work, I really had to have a good understanding of the internals of the unit that I was testing, in order to create tests that passed. I feel that this very much defeats the purpose of the unit tests (I'm testing the internals as opposed to the interface).
Currently, through the use of dependency injection, I've created a Mock Company object that returns some Mock Documents that will sometimes return Mock values based on different circumstances, just to get the tests to run. Is there a better way? Has anyone been able to effectively unit test code that heavily depends on some external library? Especially when the results of the tests should be some change to that mocked object? (Say, when the add-on runs, the Mock Company object's SaveDocument function should be called with this Mock document).
I know this may be a strange question, but the fact of the matter is that in order to get these unit tests to run well, I feel like the only option open to me is to create a really, really large mock that handles multiple Mock Documents, knows when to give out which document at the right time, and a lot of other things. It'd essentially be mocking out all of SAP. I don't know if there's some other best practice others follow in these cases.
Thanks in advance!
EDIT: Carl Manaster:
You're probably right. I think the problem is that most of the existing code base is very procedural. A lot of Windows services with a Run() method. I can definitely see how, if the project was structured a bit better, tests could be made with a lot more ease.
But let's say that the company can't invest in refactoring all of these existing projects. Should I just abandon the idea of unit testing these things?
If your methods are short enough, you should be able to mock only the interactions with one entity (Company), without interacting with the entities it returns. What you need is for your method to call (let's say) company.getDocument(). If your method under test has further interactions with the returned document at that point, split out that code, so that you can test that code's interactions with a (mocked) Document, without worrying about the Company in that test. It sounds as though your methods are currently much too long and involved for this kind of approach, but if you whittle away at them to the point where testing one method simply verifies that company.getDocument was called, you will find it much easier to test, much easier to work with, and ultimately much easier to understand.
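In Moq terms, such a whittled-down test might look like this (ICompany, IDocument and DocumentProcessor are stand-ins for illustration, not SAP's actual types):

    using Moq;
    using NUnit.Framework;

    [Test]
    public void Run_FetchesTheDocumentFromTheCompany()
    {
        var document = new Mock<IDocument>();
        var company = new Mock<ICompany>();
        company.Setup(c => c.GetDocument(1)).Returns(document.Object);

        new DocumentProcessor(company.Object).Run(1);

        // The only thing this test pins down is the interaction with
        // Company; the Document handling gets its own, separate tests.
        company.Verify(c => c.GetDocument(1), Times.Once());
    }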
Update
To your question of whether you should abandon the idea of unit testing: Do you want it to work? Do you have changes to make to the code? If the answers are (as I would assume) affirmative, then you should probably persevere. You don't have to test everything - but test what you're changing. You don't have to refactor everything - but refactor what you're working on so it's easier to test and easier to change. That "whittling away" that I mentioned: do that in service of solving the problems you have at the moment with your code base; over time you will find the code that most needed the tests has been tested - and it's a lot easier to work with because it's well tested and better factored.

my class has 30 properties, unit testing is a pain no? [closed]

My class has like 30-40 properties, and I really want to unit test it.
But I have to create a Moq instance (many of them, with different combinations, etc.).
Is there an easy way? This is real work!
My class can't be refactored, "trust me" (hehe, no really it can't, they are just properties of the object that are very tightly coupled).
Sounds like you need to do some major refactoring. I would start by taking a good look at the single responsibility principle, and making classes that will only have 1 reason to change. Once you break out functionality into separate classes that deal with only 1 responsibility, you can start writing tests for those classes, and they shouldn't take a page-full of mock objects.
This is the advantage of test-driven development -- you immediately run into the problems caused by huge classes, and are driven to avoid them if you want to be able to write tests.
Personally, I don't think you need to try every combination to test your class.
You mention lots about properties, but little about behavior. Shouldn't the tests be about behavior more than state?
There could well be situations where, due to the nature of the class, there are a lot of legitimate properties. I know, I've been there and done that. When examining that class, it is important to determine that each property really does belong in the one class, and not elsewhere. The Single Responsibility Principle comes into play here.
Unfortunately, to break any tight coupling, it will take some time and effort to refactor. Just suck it up and get 'er done!

Do large enterprises utilize mocking/stubbing? [closed]

Has anyone worked at a large company, or on a very large project, that successfully used unit testing?
Our current database has ~300 tables, with ~100 aggregate roots. Overall there are ~4000 columns, and we'll have ~2 Million lines of code when complete. I was wondering - do companies with databases of this size (or much larger) actually go through the effort to Mock/Stub their domain objects for testing? It's been two years since I worked in a large company, but at the time all large applications were tested via integration tests. Unit testing was generally frowned upon if it required much set up.
I'm beginning to feel like unit testing is a waste of time for anything but static methods, as many of our test methods take just as long or longer to write than the actual code... in particular, the setup/arrange steps. To make things worse, one of our developers keeps quoting how unit testing and Agile methods were such an abject failure on Kent Beck's Chrysler project... and that it's just not a methodology that scales well.
Any references or experiences would be great. Management likes the idea of Unit Testing, but if they see the amount of extra code we're writing (and our frustration) they'd be happy to back down.
I've seen TDD work very well on large projects, especially to help us get a legacy code base under control. I've also seen Agile work at a large scale, though just doing Agile practices alone isn't sufficient I think. Richard Durnall wrote a great post about how things break in a company as Agile gains ground. I suspect Lean may be a better fit at higher levels in an organisation. Certainly if the company culture isn't a good match for Agile by the time it starts being used across multiple projects, it won't work (but neither will anything else; you end up with a company that can't respond effectively to change either way).
Anyway, back to TDD... Testing stand-alone units of code can sometimes be tricky, and if there's a big data-driven domain object involved I frequently don't mock it. Instead I use a builder pattern to make it easy to set that domain object up in the right way.
If the domain object has complex behaviour, I might mock that so that it behaves predictably.
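A typical shape for such a builder (the domain names are invented here):

    public class CustomerBuilder
    {
        private string _name = "Default Name";
        private bool _isPremium = false;

        public CustomerBuilder Named(string name) { _name = name; return this; }
        public CustomerBuilder AsPremium() { _isPremium = true; return this; }

        public Customer Build()
        {
            // Centralises the fiddly "valid object" setup in one place,
            // so each test states only what it actually cares about.
            return new Customer { Name = _name, IsPremium = _isPremium };
        }
    }

    // In a test:
    var customer = new CustomerBuilder().Named("Acme").AsPremium().Build();

Defaults keep every built object valid; the chained calls override only the fields the test is actually about.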
For me, the purpose of writing unit tests is not really for regression testing. It helps me think about the behaviour of the code, its responsibilities and how it uses other pieces of code to help it do what it does. It provides documentation for other developers, and helps me keep my design clean. I think of them as examples of how you can use a piece of code, why it's valuable and the kind of behaviour you can expect from it.
By thinking of them this way I tend to write tests which make the code easy and safe to change, rather than pinning it down so nobody can break it. I've found that focusing on mocking everything out, especially domain objects, can cause quite brittle tests.
The purpose of TDD is not testing. If you want to test something you can get a tester to look at it manually. The only reason that testers can't do that every time is because we keep changing the code, so the purpose of TDD is to make the code easy to change. If your TDD isn't making things easier for you, find a different way to do it.
I've had some good experiences with mock objects and unit testing in projects where there was a lot of upfront design and a comfortable timeline to work with -- unfortunately that is often a luxury most companies can't afford to risk. GTD and GTDF methodologies really don't help the problem either, as they put developers on a release treadmill.
The big problem with unit tests is that if you don't have buy-in from the whole team, what happens is that one developer looks at the code through rose-colored glasses and (through no fault of their own) implements only the happy-path tests they can think of. Unit tests don't always get kept up as well as they should, because corner cases slip by and not everyone drinks the Kool-Aid. Testing is a very different mindset from coming up with the algorithms, and many developers really just don't know how to think that way.
When iterations and development cycles are tight, I find myself gaining more confidence in the code quality by relying on static analysis and complexity tools (FindBugs, PMD, Clang/LLVM, etc.). Even if the findings are in areas you can't directly address, you can flag them as landmines and better determine the risk of implementing new features in that area.
If you find that mocking/stubbing is painful and takes a long time, then you probably have a design that is not made for unit testing. Then you either refactor or live with it.
I would refactor.
I have a large application and see no trouble in writing unit tests; when I do have trouble, I know it's time to refactor.
Of course there is nothing wrong with integration tests. I actually have those too, to test the DAL and other parts of the application.
All the automated tests should form a whole; unit tests are just one part of that.
Yes they do. Quite extensively.
The hard part is getting the discipline in place to write clean code - and (the even harder part) the discipline to chip away at bad code by refactoring as you go.
I've worked in one of the world's biggest banks on a project that has been used from New York, London, Paris and Tokyo. It used mocks very well and through a lot of discipline we had pretty clean code.
I doubt that mocks are the problem - they're just a fairly simple tool. If you've got to rely on them super-heavily - say it looks like you need mocks of mocks returning mocks - then something has gone wrong with the test or the code...
