.NET: Mock a service I do not control [closed] - c#

How can I mock a service that I do not have control of (an external company's service to which I have access but do not want to hit when running my unit tests)?
I am writing my consumer in C# .NET 4.6. My intended aim is to be able to test the inner workings of my consumer (and then the library code that consumes that) without actually hitting the remote service. Unfortunately the remote service is rather complicated and has a large number of calls and types of its own.
Any assistance or pointers would be greatly appreciated.
Edit: After an immediate downvote with no comment, let me add: I have googled this but unfortunately did not find answers that helped me. Perhaps I just used poor search terms.
Edit 2: The remote service is ASMX. My consumer is a .NET 4.6 library that will act as the control library for any consuming user interface (be it WPF, WinForms, MVC, etc.).

You can abstract the external service functionality into an interface (IExternalService), and then create another implementation for it (besides the original one), a mock one: MockExternalService. This one could be as simple as returning some dummy data, or it could contain some logic (returning a different response depending on certain method parameters, for example).
You then need to wire all this up so that, through some mechanism (a custom header, web.config setting, or db setting, for example), your consumer can swap between the two service implementations (this somewhat implies that you're using Dependency Injection).
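A minimal sketch of what that could look like (the interface, method, and type names here are hypothetical placeholders, not the real service's API):

```csharp
// Abstraction over the external service.
public interface IExternalService
{
    CustomerInfo GetCustomer(string customerId);
}

// Hypothetical DTO returned by the abstraction.
public class CustomerInfo
{
    public string Id { get; set; }
    public string Name { get; set; }
}

// Mock implementation used by unit tests; it never touches the network.
public class MockExternalService : IExternalService
{
    public CustomerInfo GetCustomer(string customerId)
    {
        // Return different dummy data depending on the input if a test needs it.
        if (customerId == "unknown")
        {
            return null;
        }
        return new CustomerInfo { Id = customerId, Name = "Test Customer" };
    }
}
```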

Right-click on the imported Service Reference class, choose "Extract Interface", and you have the beginnings of your mock. Change all existing code that references the concrete service reference class to reference the new interface; then, wherever you create a new instance of the concrete class, replace that code with a factory using your favourite IoC framework (or just write the factory yourself).
Then create a "TestServiceEndpoint" class implementing the extracted interface and start writing your mock responses.
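A rough sketch of that wiring, assuming the extracted interface is called IRemoteService (all names below are placeholders; the real extracted interface will mirror the generated service reference):

```csharp
using System;

// Extracted from the generated service reference class via "Extract Interface".
public interface IRemoteService
{
    OrderStatus GetOrderStatus(int orderId);
}

public enum OrderStatus { Pending, Shipped, Cancelled }

// Hand-written mock responses live here; no network calls.
public class TestServiceEndpoint : IRemoteService
{
    public OrderStatus GetOrderStatus(int orderId)
    {
        return orderId > 0 ? OrderStatus.Shipped : OrderStatus.Cancelled;
    }
}

// Hand-rolled factory; an IoC container registration would achieve the same thing.
public class RemoteServiceFactory
{
    private readonly Func<IRemoteService> _createRealClient;
    public bool UseTestEndpoint { get; set; }

    // In production code, pass a delegate that wraps the generated service reference client.
    public RemoteServiceFactory(Func<IRemoteService> createRealClient)
    {
        _createRealClient = createRealClient;
    }

    public IRemoteService Create()
    {
        if (UseTestEndpoint)
        {
            return new TestServiceEndpoint();
        }
        return _createRealClient();
    }
}
```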
Alternatively use the Visual Studio Fakes capability to create a Fake implementation and write tests with that. This is only really suitable for a unit test framework environment though, so if you are trying to stub out the endpoint so you can test the rest of your application, the first approach is the best.

Related

Unit test, large setups/fixtures [closed]

Let's say we have a service with other services injected via DI. The method under test does something with the input data, performs some validations, calls a couple of these injected services (which may only fetch data, or may modify data and return something), and then returns a result.
Given this scenario, I need to write many test cases covering all possible behaviors: validation exceptions, not-found exceptions, business exceptions, the normal flow, etc.
The problem is that I need to mock all the methods on an injected service for the setup. This could grow fast.
What's the best approach for fixtures and setups (mocking dependencies) in this large/complex method? Is there a pattern that solves this?
For data mocking I use the builder pattern, which simplifies the task very well.
You should try to create independent classes that you can test without introducing too many dependencies, but at some point there will be a class which uses other components (for example, a ViewModel). In such cases I use:
https://github.com/AutoFixture/AutoFixture
It helps with creating the system/class under test and with injecting its dependencies. You can use it with NSubstitute, but not only with it.
Using AutoFixture you can create mocks for the dependencies you want to examine, while the remaining dependencies that are not needed are auto-generated for you, so extending a constructor will not force you to modify a bunch of unit tests.
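A minimal sketch of how this can look with AutoFixture plus its NSubstitute integration (the OrderService, IOrderRepository and INotifier names are hypothetical):

```csharp
using System.Linq;
using AutoFixture;
using AutoFixture.AutoNSubstitute;
using NSubstitute;
using NUnit.Framework;

public interface IOrderRepository { decimal[] GetLineItemTotals(int orderId); }
public interface INotifier { void OrderTotalled(int orderId); }

public class OrderService
{
    private readonly IOrderRepository _repository;
    private readonly INotifier _notifier;

    public OrderService(IOrderRepository repository, INotifier notifier)
    {
        _repository = repository;
        _notifier = notifier;
    }

    public decimal CalculateTotal(int orderId)
    {
        var total = _repository.GetLineItemTotals(orderId).Sum();
        _notifier.OrderTotalled(orderId);
        return total;
    }
}

[TestFixture]
public class OrderServiceTests
{
    [Test]
    public void CalculateTotal_ReturnsSumOfLineItems()
    {
        // Constructor dependencies we do not care about (INotifier here) are
        // auto-generated, so adding another dependency later will not break this test.
        var fixture = new Fixture().Customize(new AutoNSubstituteCustomization());

        // Freeze the one dependency we want to control; the same substitute
        // instance is then injected into the system under test.
        var repository = fixture.Freeze<IOrderRepository>();
        repository.GetLineItemTotals(42).Returns(new[] { 10m, 15m });

        var sut = fixture.Create<OrderService>();

        Assert.AreEqual(25m, sut.CalculateTotal(42));
    }
}
```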

DTO, Data Layer and types to return [closed]

At my job we use Entity Framework for data access. I am creating a Data Access Layer, a Business Logic Layer and a few different types of projects to access the BLL (a Web API for client applications to interface with, multiple MVC websites, and a few different desktop WinForms applications).
I added the DTOs to a separate project named "DTO". The goal of this project within the solution is to have a DLL with all the definitions for the classes and interfaces that will be passed back and forth. That way this one project can be added as a git submodule within other solutions and updated for all the UI projects to use collectively. I will not be working on all the UIs as we bring more developers into the project, and we will probably need multiple VS solutions.
My thought was to have the Data Access Layer pass back and take in DTOs instead of entity objects. This would decouple the process completely.
If we ever wanted to replace the DAL with something else, as long as it followed the interfaces defined in the DTO project, we should be fine. I also think it would make testing easier, as I can replace the DAL with a project that generates the DTOs with something like Seed.net. BTW, replacement is a real possibility given our environment.
Is adding this layer of complexity bad or against design standards? Is there something I am missing?
This is the way I work, and having worked in the Cloud world for some years now, it seems to be the way everyone works.
Typically you have the following projects (each built as an individual assembly):
- REST controllers
- Models, which are used to pass information between the controller layer and the business logic
- Business logic interfaces (like ImyService)
- Business logic (like myService)
- DTOs
- IRepository (like ImyRepo)
- Repository (like myRepo), which is the same as the DAL
The great thing about doing this is that if you add dependency inversion (IoC), you can create a mock repository in order to isolate and test the service (business logic) layer, and so on, by injecting the mock into NUnit unit tests.
Quite often people in the industry (including me) use AutoMapper to convert Models to DTOs to Entities and the reverse.
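For the conversion step, a minimal AutoMapper sketch might look like this (the Customer types are hypothetical, and the configuration would normally be done once at startup):

```csharp
using AutoMapper;

// Hypothetical EF entity and DTO.
public class CustomerEntity { public int Id { get; set; } public string Name { get; set; } }
public class CustomerDto { public int Id { get; set; } public string Name { get; set; } }

public static class CustomerMapping
{
    public static CustomerDto ToDto(CustomerEntity entity)
    {
        // Properties with matching names are mapped by convention.
        var config = new MapperConfiguration(cfg => cfg.CreateMap<CustomerEntity, CustomerDto>());
        IMapper mapper = config.CreateMapper();
        return mapper.Map<CustomerDto>(entity);
    }
}
```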

Passing HttpStatusCodes throughout different layers in web api application [closed]

I'm writing a Web API app that I have divided into various projects such as Web, Services and DataAccess, so basically the Web API controller calls the service layer, which can then access the data access layer.
I was returning just a bool to let me know whether the data access method completed OK, then picking this up in the service layer and passing it back to the controller, where I can respond with an HTTP status code of 200, 500, etc., depending on whether the operation returned true or false.
Instead of a bool, is it good practice to use HttpStatusCode values throughout, or should HTTP status codes only be used in the controller to return a response to the app that's calling the Web API, or should it be something else?
Thanks,
First of all, classes should have the least possible knowledge of the world around them. Suppose you implement the repository pattern to fetch data. Your repository (data access layer) should not even know about HTTP, nor should it expect to be part of a web application. Its only concern is accessing a particular table.
It's difficult to suggest a specific solution without understanding the big picture, but you may consider the following (a sketch follows the list):
- Raise an exception if your application depends on data that couldn't be fetched. It will propagate as a 500 response.
- Use an enum instead of a bool to make the code more readable.
- Create a DataResponse class to encapsulate the result of a data access operation. You may then use the adapter pattern to adapt a DataResponse to an HttpResponse.
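A minimal sketch of the last two points (the type names and the particular status-code mapping below are just illustrative choices):

```csharp
using System.Net;

// Returned by the data access layer; it knows nothing about HTTP.
public enum DataResultStatus { Success, NotFound, Error }

public class DataResponse<T>
{
    public DataResultStatus Status { get; set; }
    public T Data { get; set; }
}

// Adapter used only at the controller / Web API boundary.
public static class DataResponseAdapter
{
    public static HttpStatusCode ToStatusCode(DataResultStatus status)
    {
        switch (status)
        {
            case DataResultStatus.Success:
                return HttpStatusCode.OK;
            case DataResultStatus.NotFound:
                return HttpStatusCode.NotFound;
            default:
                return HttpStatusCode.InternalServerError;
        }
    }
}
```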
Vague question, but I'll attempt an answer.
This really depends on the reason for the separation between the layers, and what each layer is concerned with. One question I would ask myself is: why do you have a service layer? Is it because it contains the business logic? Is it because the intent is to have the option of reusing it outside the Web API context? Or do you expect the service layer to depend on the Web API context (i.e. to know that it is handling a web request rather than, say, being reused inside a WinForms app)?
Most likely, you want to constrain dealing with HTTP particulars to the controller (this is obviously just my opinion), but I'd refrain from using it as a hard-and-fast rule.
You shouldn't be propagating HTTP status codes down or up the line. If you do, you are injecting a dependency on exactly what you worked so hard to decouple. One of the great things about N-tier architecture is that, yes, your web layer may primarily be used for interacting with your service layer today, but what happens when you want to hook up a native mobile application, a Windows service, or a desktop app to call it? You are handicapping its potential by trying to persist that HTTP-specific error up and down the chain.

Injection or creating instance with new() [closed]

I'm a C# programmer and I'm thinking about dependency injection. I've read the "Dependency Injection in .NET" book, and the patterns and anti-patterns of DI are quite clear to me. I use constructor injection most of the time. Are there any cases in which it is preferable to create instances directly rather than using a dependency injection framework?
Using dependency injection has the advantage of making code testable; however, abusing the DI pattern makes code harder to understand. Take as an example this framework (ouzel; I'm not affiliated in any way, I just liked the way it was designed), which I recently started following: as you can see, most classes have their dependencies injected, yet there is still a single shared instance, sharedEngine, that is not constructor-injected.
In that particular case I think the author made a good choice; it makes the code simpler to understand overall (simpler constructors, fewer members) and potentially more performant (you don't have a shared pointer stored in every instance of every class of the engine).
The code can still be tested, because you can replace that global instance with a mock (the worst aspect of globals is that their initialization order and dependencies are hard to track; however, if you limit yourself to a few globals with no or few dependencies, this is not a problem). As you can see, you are not always forced to inject everything through the constructor (and I wrote a DI framework for C++).
The problem is that people think it is always good to inject everything through the constructor, so you suddenly start seeing frameworks that allow injecting anything (like an int or a std::vector<float>), while in reality that is a very bad idea (in fact, in my simple framework I only allow injecting classes): the code becomes harder to understand because you are mixing configuration values with logic wiring, and you have to travel through more files to get a grasp of what the code is doing.
So, constructor injection is very good; use it where appropriate, but it is not a jack-of-all-trades, and like everything in programming you have to avoid abusing it. Best of all, try to understand good examples of every programming practice/pattern and then roll your own recipe; programming is made of choices, and every choice has good and bad sides.
When is it OK (and by "OK" I mean you will still be able to test the code, as if it were not coupled to concrete instances) to call "new"?
- You need polymorphism; most of the time it is easier to create the new class directly than to configure it through a DI framework.
- You need an object factory; usually the factory itself is injected, but the factory code calls "new" explicitly (see the sketch below).
- You are calling "new" in the main method (the composition root).
- The object you are creating with "new" has no dependencies, so using it inside a class does not make the class harder to test (for example, you create standard .NET collections with new; doing otherwise results in much more confusion).
- The object you are creating is a global instance which does not rely on the order of initialization and whose dependencies are not visible elsewhere (you can mock the instance as long as you access it through an interface).
The above list gives situations in which, even when using a DI framework (like Ninject), it is OK to call "new" without losing the ability to test your code; indeed, if you use DI in the above cases you usually end up with more complex code.
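As a minimal sketch of the factory case above (all names are hypothetical), the factory itself is injected, but inside it, calling "new" is perfectly fine and the consuming class remains testable:

```csharp
public interface IParser
{
    int Parse(string input);
}

public class CsvParser : IParser
{
    public int Parse(string input)
    {
        return input.Split(',').Length;
    }
}

// The factory is what gets injected; it is free to call "new".
public interface IParserFactory
{
    IParser Create();
}

public class ParserFactory : IParserFactory
{
    public IParser Create()
    {
        return new CsvParser();
    }
}

public class ImportService
{
    private readonly IParserFactory _factory;

    // Constructor injection of the factory keeps ImportService testable:
    // a test can pass a factory that returns a fake IParser.
    public ImportService(IParserFactory factory)
    {
        _factory = factory;
    }

    public int CountColumns(string line)
    {
        return _factory.Create().Parse(line);
    }
}
```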

Test Driven Development with very large Mock [closed]

I am working for a consulting company that develops a lot of Add-Ons for SAP Business One using .NET. I want to introduce TDD (or at least some good unit testing practices) to the company to increase code quality. Here's the problem.
SAP provides a COM object (called Company) that lets you interact with the data in SAP. Company is an interface, so it can be mocked, but the amount of Mocking that would have to be done to get a single test to run is huge! I've tried it out with a few tests, and although they work, I really had to have a good understanding of the internals of the unit that I was testing, in order to create tests that passed. I feel that this very much defeats the purpose of the unit tests (I'm testing the internals as opposed to the interface).
Currently, through the use of dependency injection, I've created a Mock Company object that returns some Mock Documents that will sometimes return Mock values based on different circumstances, just to get the tests to run. Is there a better way? Has anyone been able to effectively unit test code that heavily depends on some external library? Especially when the results of the tests should be some change to that mocked object? (Say, when the add-on runs, the Mock Company object's SaveDocument function should be called with this Mock document).
I know this may be a strange question, but the fact of the matter is that in order to get these unit tests to run well, I feel like my only option is to create a really, really large mock that handles multiple mock documents, knows when to hand out the right document at the right time, and a lot of other things. It would essentially be mocking out all of SAP. I don't know if there's some other best practice for these cases.
Thanks in advance!
EDIT: Carl Manaster:
You're probably right. I think the problem is that most of the existing code base is very procedural. A lot of Windows services with a Run() method. I can definitely see how, if the project was structured a bit better, tests could be made with a lot more ease.
But let's say that the company can't invest in refactoring all of these existing projects. Should I just abandon the idea of unit testing these things?
If your methods are short enough, you should be able to mock only the interactions with one entity (Company), without interacting with the entities it returns. What you need is for your method to call (let's say) company.getDocument(). If your method under test has further interactions with the returned document at that point, split out that code, so that you can test that code's interactions with a (mocked) Document, without worrying about the Company in that test. It sounds as though your methods are currently much too long and involved for this kind of approach, but if you whittle away at them to the point where testing one method simply verifies that company.getDocument was called, you will find it much easier to test, much easier to work with, and ultimately much easier to understand.
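For illustration, a minimal sketch of that kind of interaction test using Moq (the ICompany/IDocument interfaces and the DocumentFetcher class are hypothetical stand-ins for the SAP types):

```csharp
using Moq;
using NUnit.Framework;

public interface IDocument { }
public interface ICompany { IDocument GetDocument(string id); }

// The class under test only asks the company for a document; any further
// work on the document lives in a separate, separately tested class.
public class DocumentFetcher
{
    private readonly ICompany _company;

    public DocumentFetcher(ICompany company)
    {
        _company = company;
    }

    public IDocument Fetch(string id)
    {
        return _company.GetDocument(id);
    }
}

[TestFixture]
public class DocumentFetcherTests
{
    [Test]
    public void Fetch_AsksCompanyForTheDocument()
    {
        var company = new Mock<ICompany>();
        company.Setup(c => c.GetDocument("42")).Returns(Mock.Of<IDocument>());

        var fetcher = new DocumentFetcher(company.Object);
        fetcher.Fetch("42");

        // The only interaction this test verifies is the call to Company.
        company.Verify(c => c.GetDocument("42"), Times.Once());
    }
}
```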
Update
To your question of whether you should abandon the idea of unit testing: Do you want it to work? Do you have changes to make to the code? If the answers are (as I would assume) affirmative, then you should probably persevere. You don't have to test everything - but test what you're changing. You don't have to refactor everything - but refactor what you're working on so it's easier to test and easier to change. That "whittling away" that I mentioned: do that in service of solving the problems you have at the moment with your code base; over time you will find the code that most needed the tests has been tested - and it's a lot easier to work with because it's well tested and better factored.
