How can I implement static 'test local' data in an NUnit test? - c#

I have a couple thousand NUnit tests for a library, many of which rely on having some statically available 'request context' that is scoped to the request being served and flows across tasks. The library consumer provides an implementation to retrieve the current request context.
I need to implement something to provide this context for our NUnit test project, where the context is scoped to each individual test run; each test run should have its own object, and I should be able to access it from anywhere during the test.
Initially, I had achieved this using TestContext.Current.Properties and storing my request context there, but with a recent NUnit update, Properties has become read-only.
Is there any replacement that I can use to achieve 'test local' data? i.e. something that's scoped to the current test run, and statically accessible.

A similar issue on GitHub contains the following statement from an NUnit developer:
However, it's not intended that you should change the properties of an
NUnit Test, because Test and its derivatives are internal and the
implementation can change. The internal classes allow it because
custom attributes may need to do it, but I recommend that tests avoid
doing it.
And such an implementation change has actually happened. Before NUnit 2.6.0, TestContext had a Properties bag, but since 2.6.0 it has been moved to TestAdapter. You can still access it via TestContext.CurrentContext.Test.Properties, but you have no guarantee that this will not change again in the future.
A cleaner way to implement such a context accessor is to add a simple holder that compares the current test against the test for which the current context instance was created. If they do not match, it creates a new context instance and remembers the current test.
Here is a working sample:
internal static class ContextAccessor
{
    private static TestExecutionContext currentRequestTest;
    private static RequestContext currentRequestContext;

    public static RequestContext Current
    {
        get
        {
            var currTest = TestExecutionContext.CurrentContext;
            if (currentRequestTest == currTest)
            {
                return currentRequestContext;
            }
            currentRequestContext = CreateRequestContext();
            currentRequestTest = currTest;
            return currentRequestContext;
        }
    }

    public static RequestContext CreateRequestContext()
    {
        return new RequestContext();
    }
}
RequestContext here is your context class. CreateRequestContext() is basically a factory method that creates the context; you can put whatever logic you need for creating a new context instance in it.
Now in a test you can just call ContextAccessor.Current:
[Test]
public void SomeTest()
{
    var context1 = ContextAccessor.Current;
    var context2 = ContextAccessor.Current;
    Assert.AreSame(context1, context2);
}
Sample Project on GitHub

Related

Is it best practice to test my Web API controllers directly or through an HTTP client?

I'm adding some unit tests for my ASP.NET Core Web API, and I'm wondering whether to unit test the controllers directly or through an HTTP client. Directly would look roughly like this:
[TestMethod]
public async Task GetGroups_Succeeds()
{
var controller = new GroupsController(
_groupsLoggerMock.Object,
_uowRunnerMock.Object,
_repoFactoryMock.Object
);
var groups = await controller.GetGroups();
Assert.IsNotNull(groups);
}
... whereas through an HTTP client would look roughly like this:
[TestMethod]
public void GetGroups_Succeeds()
{
HttpClient.Execute();
dynamic obj = JsonConvert.DeserializeObject<dynamic>(HttpClient.ResponseContent);
Assert.AreEqual(200, HttpClient.ResponseStatusCode);
Assert.AreEqual("OK", HttpClient.ResponseStatusMsg);
string groupid = obj[0].id;
string name = obj[0].name;
string usercount = obj[0].userCount;
string participantsjson = obj[0].participantsJson;
Assert.IsNotNull(name);
Assert.IsNotNull(usercount);
Assert.IsNotNull(participantsjson);
}
Searching online, it looks like both ways of testing an API seem to be used, but I'm wondering what the best practice is. The second method seems a bit better because it naively tests the actual JSON response from the Web API without knowing the actual response object type, but it's more difficult to inject mock repositories this way - the tests would have to connect to a separate local Web API server that itself was somehow configured to use mock objects... I think?
Edit: TL;DR
The conclusion: you should do both, because each test serves a different purpose.
Answer:
This is a good question, one I often ask myself.
First, you must look at the purpose of a unit test and the purpose of an integration test.
Unit Test :
Unit tests involve testing a part of an app in isolation from its
infrastructure and dependencies. When unit testing controller logic,
only the contents of a single action are tested, not the behaviour of
its dependencies or of the framework itself.
Things like filters, routing, and model binding will not work.
Integration Test :
Integration tests ensure that an app's components function correctly
at a level that includes the app's supporting infrastructures, such as
the database, file system, and network. ASP.NET Core supports
integration tests using a unit test framework with a test web host and
an in-memory test server.
Things like filters, routing, and model binding will work.
“Best practice” should be thought of as “Has value and makes sense”.
You should ask yourself Is there any value in writing the test, or am I just creating this test for the sake of writing a test?
Let's say your GetGroups() method looks like this.
[HttpGet]
[Authorize]
public async Task<ActionResult<Group>> GetGroups()
{
var groups = await _repository.ListAllAsync();
return Ok(groups);
}
There is no value in writing a unit test for it, because all you would be testing is a mocked implementation of _repository. So what is the point of that?
The method has no logic, and the repository is only going to return exactly what you mocked it to return; nothing in the method suggests otherwise.
The Repository will have its own set of separate unit tests where you will cover the implementation of the repository methods.
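For illustration, such a repository test might look roughly like this (a sketch using xUnit and EF Core's in-memory provider; GroupRepository, AppDbContext and the Group entity are assumed names, not code from the question):
// requires Microsoft.EntityFrameworkCore.InMemory and xUnit
public class GroupRepositoryShould
{
    [Fact]
    public async Task ListAllAsync_Returns_Seeded_Groups()
    {
        // Each test gets its own uniquely named in-memory database.
        var options = new DbContextOptionsBuilder<AppDbContext>()
            .UseInMemoryDatabase(nameof(ListAllAsync_Returns_Seeded_Groups))
            .Options;

        using (var context = new AppDbContext(options))
        {
            context.Groups.Add(new Group { Name = "Admins" });
            context.SaveChanges();

            var repository = new GroupRepository(context);
            var groups = await repository.ListAllAsync();

            Assert.Single(groups);
        }
    }
}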
Now let's say your GetGroups() method is more than just a wrapper for the _repository and has some logic in it.
[HttpGet]
[Authorize]
public async Task<ActionResult<Group>> GetGroups()
{
List<Group> groups;
if (HttpContext.User.IsInRole("Admin"))
groups = await _repository.FindByExpressionAsync(g => g.IsAdminGroup == true);
else
groups = await _repository.FindByExpressionAsync(g => g.IsAdminGroup == false);
//maybe some other logic that could determine a response with a different outcome...
return Ok(groups);
}
Now there is value in writing a unit test for the GetGroups() method because the outcome could change depending on the mocked HttpContext.User value.
Attributes like [Authorize] or [ServiceFilter(….)] will not be triggered in a unit test.
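To make the second case concrete, here is a rough sketch of such a controller unit test (xUnit + Moq; a GroupsController taking a single IGroupRepository, and the FindByExpressionAsync signature, are assumptions made for the example):
// requires Moq, Microsoft.AspNetCore.Http/Mvc, System.Security.Claims and System.Linq.Expressions
public class GroupsControllerShould
{
    [Fact]
    public async Task GetGroups_As_Admin_Returns_Ok()
    {
        // Arrange: the repository returns a canned result for any filter expression.
        var repoMock = new Mock<IGroupRepository>();
        repoMock.Setup(r => r.FindByExpressionAsync(It.IsAny<Expression<Func<Group, bool>>>()))
                .ReturnsAsync(new List<Group> { new Group() });

        var controller = new GroupsController(repoMock.Object);

        // Put an "Admin" user on the HttpContext so the role check takes the admin branch.
        var adminUser = new ClaimsPrincipal(new ClaimsIdentity(
            new[] { new Claim(ClaimTypes.Role, "Admin") }, "TestAuth"));
        controller.ControllerContext = new ControllerContext
        {
            HttpContext = new DefaultHttpContext { User = adminUser }
        };

        // Act
        var result = await controller.GetGroups();

        // Assert: the repository was queried once and an OK result was returned.
        repoMock.Verify(r => r.FindByExpressionAsync(It.IsAny<Expression<Func<Group, bool>>>()), Times.Once());
        Assert.IsType<OkObjectResult>(result.Result);
    }
}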
Writing integration tests is almost always worth it because you want to test what the process will do when it forms part of an actual application/system/process.
Ask yourself, is this being used by the application/system?
If yes, write an integration test because the outcome depends on a combination of circumstances and criteria.
Now even if your GetGroups() method is just a wrapper like in the first implementation, the _repository will point to an actual datastore, nothing is mocked!
So now, not only does the test cover the fact that the datastore has data (or not), it also relies on an actual connection being made, HttpContext being set up properly and whether serialisation of the information works as expected.
Things like filters, routing, and model binding will also work.
So if you had an attribute on your GetGroups() method, for example [Authorize] or [ServiceFilter(….)], it will be triggered as expected.
I use xUnit for testing so for a unit test on a controller I use this.
Controller Unit Test:
public class MyEntityControllerShould
{
private MyEntityController InitializeController(AppDbContext appDbContext)
{
var _controller = new MyEntityController (null, new MyEntityRepository(appDbContext));
var httpContext = new DefaultHttpContext();
var context = new ControllerContext(new ActionContext(httpContext, new RouteData(), new ActionDescriptor()));
_controller.ControllerContext = context;
return _controller;
}
[Fact]
public async Task Get_All_MyEntity_Records()
{
// Arrange
var _AppDbContext = AppDbContextMocker.GetAppDbContext(nameof(Get_All_MyEntity_Records));
var _controller = InitializeController(_AppDbContext);
//Act
var all = await _controller.GetAllValidEntities();
//Assert
Assert.True(all.Value.Count() > 0);
//clean up otherwise the other test will complain about key tracking.
await _AppDbContext.DisposeAsync();
}
}
The Context mocker used for unit testing.
public class AppDbContextMocker
{
/// <summary>
/// Get an In memory version of the app db context with some seeded data
/// </summary>
/// <param name="dbName"></param>
/// <returns></returns>
public static AppDbContext GetAppDbContext(string dbName)
{
//set up the options to use for this dbcontext
var options = new DbContextOptionsBuilder<AppDbContext>()
.UseInMemoryDatabase(dbName)
.Options;
var dbContext = new AppDbContext(options);
dbContext.SeedAppDbContext();
return dbContext;
}
}
The Seed extension.
public static class AppDbContextExtensions
{
public static void SeedAppDbContext(this AppDbContext appDbContext)
{
var myEnt = new MyEntity() { Id = 1, SomeValue = "ABCD" };
appDbContext.MyEntities.Add(myEnt);
//add more seed records etc....
appDbContext.SaveChanges();
//detach everything
foreach (var entity in appDbContext.ChangeTracker.Entries())
{
entity.State = EntityState.Detached;
}
}
}
and for Integration Testing: (this is some code from a tutorial, but I can't remember where I saw it, either youtube or Pluralsight)
setup for the TestFixture
public class TestFixture<TStartup> : IDisposable
{
/// <summary>
/// Get the application project path where the startup assembly lives
/// </summary>
string GetProjectPath(string projectRelativePath, Assembly startupAssembly)
{
var projectName = startupAssembly.GetName().Name;
var applicationBasePath = AppContext.BaseDirectory;
var directoryInfo = new DirectoryInfo(applicationBasePath);
do
{
directoryInfo = directoryInfo.Parent;
var projectDirectoryInfo = new DirectoryInfo(Path.Combine(directoryInfo.FullName, projectRelativePath));
if (projectDirectoryInfo.Exists)
{
if (new FileInfo(Path.Combine(projectDirectoryInfo.FullName, projectName, $"{projectName}.csproj")).Exists)
return Path.Combine(projectDirectoryInfo.FullName, projectName);
}
} while (directoryInfo.Parent != null);
throw new Exception($"Project root could not be located using application root {applicationBasePath}");
}
/// <summary>
/// The temporary test server that will be used to host the controllers
/// </summary>
private TestServer _server;
/// <summary>
/// The client used to send information to the service host server
/// </summary>
public HttpClient HttpClient { get; }
public TestFixture() : this(Path.Combine(""))
{ }
protected TestFixture(string relativeTargetProjectParentDirectory)
{
var startupAssembly = typeof(TStartup).GetTypeInfo().Assembly;
var contentRoot = GetProjectPath(relativeTargetProjectParentDirectory, startupAssembly);
var configurationBuilder = new ConfigurationBuilder()
.SetBasePath(contentRoot)
.AddJsonFile("appsettings.json")
.AddJsonFile("appsettings.Development.json");
var webHostBuilder = new WebHostBuilder()
.UseContentRoot(contentRoot)
.ConfigureServices(InitializeServices)
.UseConfiguration(configurationBuilder.Build())
.UseEnvironment("Development")
.UseStartup(typeof(TStartup));
//create test instance of the server
_server = new TestServer(webHostBuilder);
//configure client
HttpClient = _server.CreateClient();
HttpClient.BaseAddress = new Uri("http://localhost:5005");
HttpClient.DefaultRequestHeaders.Accept.Clear();
HttpClient.DefaultRequestHeaders.Accept.Add(new MediaTypeWithQualityHeaderValue("application/json"));
}
/// <summary>
/// Initialize the services so that it matches the services used in the main API project
/// </summary>
protected virtual void InitializeServices(IServiceCollection services)
{
var startupAssembly = typeof(TStartup).GetTypeInfo().Assembly;
var manager = new ApplicationPartManager
{
ApplicationParts = {
new AssemblyPart(startupAssembly)
},
FeatureProviders = {
new ControllerFeatureProvider()
}
};
services.AddSingleton(manager);
}
/// <summary>
/// Dispose the Client and the Server
/// </summary>
public void Dispose()
{
HttpClient.Dispose();
_server.Dispose();
_ctx?.Dispose();
}
AppDbContext _ctx = null;
public void SeedDataToContext()
{
if (_ctx == null)
{
_ctx = _server.Services.GetService<AppDbContext>();
if (_ctx != null)
_ctx.SeedAppDbContext();
}
}
}
and use it like this in the integration test.
public class MyEntityControllerShould : IClassFixture<TestFixture<MyEntityApp.Api.Startup>>
{
private HttpClient _HttpClient;
private const string _BaseRequestUri = "/api/myentities";
public MyEntityControllerShould(TestFixture<MyEntityApp.Api.Startup> fixture)
{
_HttpClient = fixture.HttpClient;
fixture.SeedDataToContext();
}
[Fact]
public async Task Get_GetAllValidEntities()
{
//arrange
var request = _BaseRequestUri;
//act
var response = await _HttpClient.GetAsync(request);
//assert
response.EnsureSuccessStatusCode(); //if exception is not thrown all is good
//convert the response content to expected result and test response
var result = await ContentHelper.ContentTo<IEnumerable<MyEntities>>(response.Content);
Assert.NotNull(result);
}
}
Added Edit:
In conclusion, you should do both, because each test serves a different purpose.
Looking at the other answers you will see that the consensus is to do both.
TL;DR
Is it best practice to test [...] directly or through an HTTP client?
Not "or" but "and". If you serious about best practices of testing - you need both tests.
First test is a unit test. But the second one is an integration test.
There is a common consensus (test pyramid) that you need more unit tests comparing to the number of integration tests. But you need both.
There are many reasons why you should prefer unit tests over integration tests, most of them boil down to the fact that unit test are small (in all senses) and integration tests - aren't. But the main 4 are:
Locality
When a unit test fails, you can usually tell from its name alone where the bug is. When an integration test goes red, you can't say right away where the issue is: maybe it's in controller.GetGroups, or in the HttpClient, or there is some issue with the network.
Also, when you introduce a bug in your code, it's quite possible that only one of your unit tests will go red, while with integration tests there is a good chance that more than one of them will fail.
Stability
With a small project which you can test on your local box you probably won't notice it. But on a big project with distributed infrastructure you will see flaky, intermittently failing tests all the time. And that will become a problem: at some point you may find yourself not trusting the test results anymore.
Speed
With a small project with a small number of tests you won't notice it. But on a big project it will become a problem (network delays, IO delays, initialization, cleanup, etc.).
Simplicity
You've noticed it yourself.
But that's not always true: if your code is poorly structured, it's often easier to write integration tests. That is one more reason to prefer unit tests; in a way they force you to write more modular code (and I'm not talking about Dependency Injection).
But also keep in mind that best practices are almost always about big projects. If your project is small, and will stay small, there is a good chance that you'll be better off making exactly the opposite decisions.
Write more tests (again, that means both). Become better at writing tests. Delete them later.
Practice makes perfect.
If we limit the scope of the discussion to Controller vs. HttpClient testing, I would say it is better to use HttpClient, because if you write tests for your controllers you are already writing integration tests, and there is almost no point in writing "weaker" integration tests when you can write stronger ones that are more realistic and a superset of the weaker ones.
For example, you can see from your own example that both of your tests test exactly the same functionality. The difference is that the latter covers more of the surface under test, such as the JSON response, or anything else you want to verify, like an HTTP header. If you write the latter test, you don't need the first test at all.
I understand the pain of having to inject mocked dependencies; this requires more effort than testing the controller directly. However, .NET Core already provides a good set of tools to help you with that: you can set up the test host inside the test itself, configure it, and get an HttpClient from it. Then you can use that HttpClient for your testing purposes.
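For example, with the Microsoft.AspNetCore.Mvc.Testing package that can look roughly like this (a sketch; the IGroupRepository/FakeGroupRepository registration and the /api/groups route are assumptions for illustration):
// requires Microsoft.AspNetCore.Mvc.Testing (ConfigureTestServices lives in Microsoft.AspNetCore.TestHost)
public class GroupsApiTests : IClassFixture<WebApplicationFactory<Startup>>
{
    private readonly WebApplicationFactory<Startup> _factory;

    public GroupsApiTests(WebApplicationFactory<Startup> factory) => _factory = factory;

    [Fact]
    public async Task GetGroups_Returns_Ok_With_Json()
    {
        // Swap the real repository for a fake one inside the test host only.
        var client = _factory
            .WithWebHostBuilder(builder => builder.ConfigureTestServices(services =>
                services.AddScoped<IGroupRepository, FakeGroupRepository>()))
            .CreateClient();

        var response = await client.GetAsync("/api/groups");

        response.EnsureSuccessStatusCode();
        Assert.Equal("application/json", response.Content.Headers.ContentType?.MediaType);
    }
}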
The other concern is that it is quite tedious to craft HttpClient requests for each test. Refit can help you a lot here: its declarative syntax is quite easy to understand (and, eventually, maintain). While I would recommend Refit for all remote API calls anyway, it is also well suited to ASP.NET Core integration testing.
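A Refit interface over the same endpoint could then look like this (again a sketch; the route and return type are assumptions):
// requires the Refit package; 'client' is the HttpClient created from the test host above
public interface IGroupsApi
{
    [Get("/api/groups")]
    Task<List<Group>> GetGroupsAsync();
}

[Fact]
public async Task GetGroups_Via_Refit_Returns_Groups()
{
    var api = RestService.For<IGroupsApi>(client);
    var groups = await api.GetGroupsAsync();
    Assert.NotEmpty(groups);
}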
Combining all the solutions available, I don't see why you should limit yourself to controller tests when you can go for more "real" integration tests with only a little more effort.
I have never liked mocking, in that as applications mature the effort spent on mocking can add up to a ton of work.
I like exercising endpoints by direct HTTP calls. Today there are fantastic tools like Cypress which allow client requests to be intercepted and altered. The power of this feature, along with easy browser-based GUI interaction, blurs traditional test definitions, because one test in Cypress can be all of these types at once: unit, functional, integration and E2E.
If an endpoint is bulletproof, then error injection from the outside becomes impossible. But even errors from within are easy to simulate: run the same Cypress tests with the DB down, or have Cypress inject intermittent network-issue simulation. This is mocking issues externally, which is closer to a prod environment.
When unit testing, it is important to know what you are going to test and to write the tests based on your requirements. The second test arguably looks like an integration test rather than a unit test, but let's set that point aside for now.
Between your two tests, I would recommend the second option, because there you are testing your Web API as a Web API, not as a class. By analogy: suppose you have a class with a method named X(). How likely is it that it will be called via Reflection? If that is completely unlikely, then writing a unit test based on Reflection is a waste of time; if it is likely, then you should write such a test too.
Moreover, using the second approach you are able to change the tech stack used to produce the Web API (for example, replace .NET with PHP) without changing your tests, which is exactly what we expect from a Web API.
Finally, you should make a decision: how is this Web API going to be used? How likely is it that your Web API will be called by directly instantiating the controller class?
Note:
It might be irrelevant to your question, but you should pay attention to your asserts, too. For example, asserting both ResponseStatusCode and ResponseStatusMsg might not be needed; asserting only one could be enough.
Also, what will happen if obj is null, or if obj has more than one member?
I'd say that they are not mutually exclusive. The first option is a classical unit test, while the second is an integration test, as it involves more than a single unit of code.
If I had time to write either unit tests or integration tests, I'd pick unit tests, as they provide a more focused approach and give, at least in my opinion, the best cost-benefit ratio.
In particular projects where I had enough resources to write different suites of tests, I wrote both kinds, where the second one would run without mocking anything (or maybe only the persistent storage) so I could test how all the components integrate together.
In relation to good practices: if you want to write a real unit test, then you have no option but to pick option one, as no external dependencies are allowed (HttpClient is an external dependency).
Then, if the time and resources allow it, you could do integration testing for the most critical and/or complex paths.
If you are looking for a non-programming option, you can use Postman: create a collection of requests and run them one by one.
You can use Swagger (aka OpenAPI).
Install Swashbuckle.AspNetCore from nuget.
using Microsoft.OpenApi.Models;
//in Startup.ConfigureServices
public void ConfigureServices(IServiceCollection services)
{
services.AddSwaggerGen(c =>
{
c.SwaggerDoc("v1", new OpenApiInfo { Title = "My API", Version = "v1" });
});
}
//in Startup.Configure
public void Configure(IApplicationBuilder app)
{
app.UseSwagger();
app.UseSwaggerUI(c =>
{
c.SwaggerEndpoint("/swagger/v1/swagger.json", "My API V1");
});
}
Finally, add "launchUrl": "swagger", in launchSettings.json

How to instantiate outside of a constructor?

How to replicate this code with Autofac syntax?
public static class MenuConfig
{
public static void Initialize()
{
var _menuService = DependecyFactory.GetInstance<IMenuService>();
Parameters.Menu = _menuService.Menu();
}
}
Before calling this a "duplicate question", please note that I'm looking for an Autofac command. I CANNOT inject the interface anywhere and then call "Resolve". What I need is to perform an "InstancePerRequest" inline and uninjected so I don't have to do this:
var _service = new Service(new Dependency(new context()));
LightInject has a method that allows instantiation from an interface OUTSIDE of a constructor like this:
var _service = DependecyFactory.GetInstance<IService>();
What is the equivalent method for Autofac?
When calling containerBuilder.Build() you get back a container which implements IContainer and ILifetimeScope; whenever you get hold of one of these interfaces, you can resolve types from it:
container.Resolve<IService>();
If you want this container to be static, you could add it as a static property to the Program or Startup class (depending on whether you're creating a console or an ASP.NET application).
Remember that the root container will be around for the entire duration of your application, so this can result in unwanted memory leaks when used incorrectly. Also see the warning in the documentation.
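Roughly, the static-container approach could look like this (a sketch for a console app; the Service/IService registration is a placeholder):
public static class Program
{
    // Root container, kept for the lifetime of the application.
    public static IContainer Container { get; private set; }

    public static void Main(string[] args)
    {
        var builder = new ContainerBuilder();
        builder.RegisterType<Service>().As<IService>();
        Container = builder.Build();

        // Comparable to LightInject's DependecyFactory.GetInstance<IService>():
        var service = Program.Container.Resolve<IService>();
    }
}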
Still, it's perfectly possible to do the memory management yourself by resolving an Owned<> version from your interface:
using (var service = Program.Container.Resolve<Owned<IService>>())
{
service.Value.UseService();
}
Anyway, since you mention a static class in the comments, the best solution is to change that into a non-static class and register it as a singleton with Autofac. Then you can inject a Func<Owned<IService>> serviceFactory into that singleton and create/dispose an instance of the service wherever you need it.
using (var service = serviceFactory())
{
service.Value.UseService();
}
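Applied to the MenuConfig class from the question, that refactoring could look roughly like this (a sketch; the registrations at the bottom are assumptions):
public class MenuConfig
{
    private readonly Func<Owned<IMenuService>> _menuServiceFactory;

    public MenuConfig(Func<Owned<IMenuService>> menuServiceFactory)
    {
        _menuServiceFactory = menuServiceFactory;
    }

    public void Initialize()
    {
        // Each Owned<> instance gets its own lifetime scope and is disposed here.
        using (var menuService = _menuServiceFactory())
        {
            Parameters.Menu = menuService.Value.Menu();
        }
    }
}

// Registration:
// builder.RegisterType<MenuService>().As<IMenuService>();
// builder.RegisterType<MenuConfig>().SingleInstance();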
This is simply not possible with Autofac. All other solutions involving Autofac will require code refactoring which may potentially break software functionality. So unfortunately, the most elegant and least disruptive solution is this:
var _service = new Service(new Dependency(new context()));
Since this is an edge case addressing only one part of the software, this compromise is acceptable. It would be nice, however, if Autofac implemented this functionality in some future release.

How to configure BsonSerializer without use of static registry?

Within a C# project we currently use static methods on BsonSerializer to register serializers for specific types. This happens once on app startup.
However, our acceptance tests start the app up before every test and shut it down after every test, and the second time the app starts up it fails when RegisterSerializer is called, as the registration from the previous test is still in the registry as it's a global static.
Is there any way to register serializers without relying on global statics? Or another strategy for avoiding this problem when running tests?
If you're using the MongoDB serializer, you could check if the serializer is already registered before registering it:
if (BsonSerializer.LookupSerializer<YourCustomType>().GetType() != typeof(YourCustomTypeSerializer))
{
    BsonSerializer.RegisterSerializer(new YourCustomTypeSerializer());
}
If you go down this route you should take multithreaded scenarios into account; the above code is not thread safe.
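One way to make the check-and-register step thread safe is to funnel all registrations through a single guarded method, for example (a sketch, reusing the hypothetical type names from above):
public static class SerializerRegistration
{
    private static readonly object _lock = new object();

    public static void EnsureRegistered()
    {
        lock (_lock)
        {
            // Only register if a different (default) serializer is currently mapped.
            if (BsonSerializer.LookupSerializer<YourCustomType>().GetType() != typeof(YourCustomTypeSerializer))
            {
                BsonSerializer.RegisterSerializer(new YourCustomTypeSerializer());
            }
        }
    }
}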
Another option would be to register your own provider and skip the individual serializer registrations:
public class YourCustomSerializationProvider : IBsonSerializationProvider
{
public IBsonSerializer GetSerializer(Type type)
{
if (type == typeof(YourCustomType)) return new YourCustomTypeSerializer();
// fall back to Mongo's defaults
return null;
}
}
// Where you previously registered individual serializers you will now register your provider instead
BsonSerializer.RegisterSerializationProvider(new YourCustomSerializationProvider());
This approach would be a bit more IoC friendly and would give you a bit more control.
Can you register these serializers in a static constructor? That way you can call it any number of times; the static constructor will not execute more than once.
public class BsonSerializerRegisterer
{
static BsonSerializerRegisterer()
{
BsonSerializer.RegisterSerializer(typeof(DateTime), new DateTimeSerializer(DateTimeKind.Utc));
BsonSerializer.RegisterSerializer(typeof(decimal), new DecimalSerializer(BsonType.Decimal128));
BsonSerializer.RegisterSerializer(typeof(decimal?), new NullableSerializer<decimal>(new DecimalSerializer(BsonType.Decimal128)));
BsonSerializer.RegisterSerializer(new EnumSerializer<MyAwesomeEnum>(BsonType.String));
}
public static void RegisterSerializers()
{
}
}
And then call:
BsonSerializerRegisterer.RegisterSerializers()
A nice article that can help any future readers
https://www.mydevhub.com/mongodb/adding-custom-type-converter-to-mongodb-in-c/
My solution was to put all BsonSerializer.* invocations in the static constructor of Startup.cs.
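In other words, something along these lines (a sketch; the serializers shown are just the examples from above):
public class Startup
{
    // Runs at most once per process, no matter how many times the
    // acceptance tests construct the host.
    static Startup()
    {
        BsonSerializer.RegisterSerializer(typeof(decimal), new DecimalSerializer(BsonType.Decimal128));
        BsonSerializer.RegisterSerializer(new EnumSerializer<MyAwesomeEnum>(BsonType.String));
    }

    // ... the usual ConfigureServices / Configure members follow ...
}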

Any way to use Moq to imitate properties of a real-world constructor automatically?

I've got a controller with a lot of constructor injection:
public MoviesController(ISession session, IClientContext clientContext, PManager pManager, ISegmentationService segmentationService, IGeoLocator geoLocator, IBus bus)
{
_session = session;
_clientContext = clientContext;
_pManager = pManager;
_segmentationService = segmentationService;
_geoLocator = geoLocator;
_bus = bus;
}
From my understanding (just read about Mocking), I've got a lot of Mock object properties to manually set if I wish to make a comprehensive test suite based on this controller.
For one method I'm only using one service (I'd even like to automate that with little effort if possible):
public object Show(Guid id)
{
var movie = _session.Get<movie>(id);
return movie;
}
But in another there are many services being used - is there any way to set those Moq objects up quickly? I could really use some examples as I'm new to testing. It's an ASP.NET MVC project with Web API 1 bundled in (I'm testing the Web API controller here).
As has been said in the comments, if you have common setup code, you can put it in a setup method that is called automatically by your testing framework before each test. That method is decorated with a [SetUp] attribute if you're using NUnit, or [TestInitialize] if you're using MSTest. If you're using xUnit then it works a bit differently (setup typically goes in the test class constructor).
So, your class might look like this:
public class SomeTests {
Mock<ISession> _sessionMock;
Mock<IClientContext> _clientContextMock;
[SetUp]
public void Setup() {
_sessionMock = new Mock<ISession>();
_clientContextMock = new Mock<IClientContext>();
}
MovieController CreateSut() {
return new MovieController(_sessionMock.Object, _clientContextMock.Object, ...);
}
[Test]
public void TestSomething() {
_sessionMock.Setup(x=>...);
//...
var sut = CreateSut();
//...
}
}
If you're trying to get away from creating the mocks manually entirely, then you might want to look at something like AutoFixture with AutoMoq, which will automatically supply mock instances when creating objects that accept interfaces. AutoFixture can be quite useful, but there is a learning curve to using it effectively, so you might want to look at a tutorial / quickstart.
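A rough sketch of what that can look like with AutoFixture.AutoMoq (this assumes ISession and the other constructor parameters are mockable interfaces; a concrete class like PManager may need extra customization, and Movie stands in for whatever entity type your session returns):
// requires AutoFixture, AutoFixture.AutoMoq and Moq
[Test]
public void Show_Returns_The_Movie_From_The_Session()
{
    var fixture = new Fixture().Customize(new AutoMoqCustomization());

    // Freeze the session mock so the very same instance is injected into the controller.
    var sessionMock = fixture.Freeze<Mock<ISession>>();
    sessionMock.Setup(s => s.Get<Movie>(It.IsAny<Guid>())).Returns(new Movie());

    // Every other constructor argument is filled in with an auto-generated mock.
    var controller = fixture.Create<MoviesController>();

    var movie = controller.Show(Guid.NewGuid());

    Assert.IsNotNull(movie);
}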
You could also configure an IOC container to supply mock instances for your test project, although I've never gone down that route myself.
For your example, you only need to mock the session, and can leave all the other dependencies null, since their behaviour should be irrelevant to the behaviour you are testing:
Mock<ISession> mockSession = new Mock<ISession>();
MoviesController controller = new MoviesController(mockSession.Object, null,null,null,null,null);
There is no need for you to set up any mocks other than the ones you need for this particular test.

Dependency Injection in MS Dynamics CRM

I am currently getting started with the extending of Microsoft Dynamics CRM using Plugins.
Is it possible to add Dependency injection to these plugins (for testing, loose coupling, etc. purposes)? Where can I register my IoC-container so that it's used over all the plugins of the same type?
We've been trying to unit test and apply dependency injection on our Dynamics CRM application. Unfortunately, as Microsoft support and consultants have confirmed, there is no supported way to do it. You can either move all of your plugin business logic into a separate business class and apply dependency injection there, or stop thinking about it.
If you choose to fight Dynamics CRM on this, you need to define a static field on a plugin superclass which will hold your DI container, as follows:
public abstract class SuperPlugin : IPlugin
{
    public void Execute(IServiceProvider serviceProvider)
    {
        // initialize a static container instance if not available
        var containerWrapper = new ContainerWrapper
        {
            Container = serviceProvider.GetService(typeof(IPluginExecutionContext)),
            Resolver = //static resolver instance of dependency container
        };
        OnExecution(containerWrapper.Resolver);
    }

    public abstract void OnExecution(IDependencyResolver resolver);
}
I really cannot understand why Microsoft doesn't simply let us register some components to the IServiceProvider implementation that they are using internally.
PS: since your SuperPlugin class is an IPlugin, you might think you can omit the interface from the subclass declaration. However, we encountered some bugs in the Plugin Registration tool that ships with the official Dynamics CRM SDK, so in case you hit the same problem you should declare your plugins as follows:
public class MyPlugin : SuperPlugin, IPlugin
{
    public override void OnExecution(IDependencyResolver resolver) { }
}
Edit: See a small example that explains the concept https://github.com/nakahparis/DIForCRM
Plugins in CRM are the Bane of Unit Testing:
Issues with non-plugin test
No way to temporarily disable
Easy to forget it is running
Issues with testing plugins themselves
Unable to unit test and attach to process
A lot to mock out, Pipeline, Service Provider etc
Runs multi-threaded
This has led me to the following solution for testing plugins:
Get rid of the plugin context as quickly as possible, extracting out all objects and service required from it right away.
Create an ExecutePlugin method to hook unit tests into, and immediately call this method after extracting the objects from the plugin context.
Push as much code as possible into the business layer.
This results in plugins that look like this (with a heavy use of extension methods to make this simpler):
public void Execute(IServiceProvider provider)
{
var context = provider.GetContext();
var service = provider.GetService(context);
var target = GetTarget<Contact>(context);
if (target == null || !target.ContainsAllNonNull(c => new
{
c.FirstName,
c.LastName,
}))
{
// Entity is of the wrong type, or doesn't contain all of the required attributes
return;
}
ExecutePlugin(service, target);
}
public void ExecutePlugin(IOrganizationService service, Contact target){
// Logic Goes Here
}
Once this is done, the only thing you need in order to unit test ExecutePlugin is your own IOrganizationService that mocks out the required calls, and your unit testing is done. I don't even bother unit testing the Execute method: either it works, or it blows up on the first use from within CRM.
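As a sketch, a test against ExecutePlugin can then be as small as this (Moq; the plugin class name and the verified Update call are placeholders for whatever your business logic actually does):
[Test]
public void ExecutePlugin_Updates_The_Target_Contact()
{
    var serviceMock = new Mock<IOrganizationService>();
    var target = new Contact { FirstName = "Jane", LastName = "Doe" };

    var plugin = new MyContactPlugin();
    plugin.ExecutePlugin(serviceMock.Object, target);

    // Verify only the organization service calls the plugin is expected to make.
    serviceMock.Verify(s => s.Update(It.IsAny<Entity>()), Times.Once());
}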
