I'm currently learning how to unit test my methods in a Silverlight RIA project. Some of my methods require the user to be authorized. I thought I could solve this by creating a mock AuthenticationService and having the user be authorized that way, but I get a NullReferenceException because the code under test calls the AuthenticationService in the project it originates from, and the exception is thrown in the CreateDefaultUser method, which I had manually overridden in the mock service.
How do I get around this?
My mock authentication service has this namespace/class definition:
namespace Notlr.Test
{
    public class MockAuthentication : AuthenticationService
    {
    }
}
My RIA AuthenticationService looks like this:
namespace Notlr.Web
{
using System;
using System.ServiceModel.DomainServices.Hosting;
using System.ServiceModel.DomainServices.Server.ApplicationServices;
using System.Web.Security;
/// <summary>
/// RIA Services DomainService responsible for authenticating users when
/// they try to log on to the application.
///
/// Most of the functionality is already provided by the base class
/// AuthenticationBase
/// </summary>
[EnableClientAccess]
public class AuthenticationService : AuthenticationBase<User>
{
}
}
Jakob, it sounds like you have namespace issues. Remember that your "test project" in Visual Studio is a project like any other: it has its own namespace and compiles into its own .NET assembly. Having a "test project" does not automatically put the mock objects into the tests; you must write the code to accomplish this on your own.
You need to make sure that you're writing the test code to use the mock AuthenticationService when the test runs. If you would like more specific help, please post the code that you're having a problem with.
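To make that concrete, here is a minimal sketch of what the test-side wiring might look like. It assumes the RIA Services AuthenticationBase<T> API (where CreateDefaultUser is a protected virtual member) and that the User type from the Business Application template exposes a settable Name property; the test fixture and names below are hypothetical.
namespace Notlr.Test
{
    using Microsoft.VisualStudio.TestTools.UnitTesting;
    using Notlr.Web;

    // The mock supplies its own default user so no membership provider is needed.
    public class MockAuthentication : AuthenticationService
    {
        protected override User CreateDefaultUser()
        {
            return new User { Name = "test-user" };
        }
    }

    [TestClass]
    public class AuthenticationTests
    {
        [TestMethod]
        public void CodeUnderTest_UsesMockAuthentication()
        {
            // The test has to construct and pass the mock itself; nothing
            // reroutes the production AuthenticationService automatically.
            var authentication = new MockAuthentication();

            // ... hand 'authentication' to the code under test here ...
        }
    }
}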
I'm trying to split an ASP.NET Core 2.1 WebAPI project into two so that we can expose two different APIs according to circumstances. Simplified, we have one API and we want all the read-only (GET) requests in one API and the entire set in another (the "admin" API). Swagger is enabled in both projects.
I duplicated the project, renaming one (namespaces, etc.) and adding both to the same solution, then commented out all the non-GET controller methods in the read-only project and commented out all the GET methods in the admin project. I then added a reference to the read-only project in the admin project.
Running the read-only project, the swagger page came up fine, just the GETs. Running the admin project gave a 500 on the swagger page. Interestingly, during debugging I found that after removing all the controllers from the admin project, the underlying API from the read-only project was exposed straight through and appeared fully functional - not something I was expecting, and a potential security issue for anyone not expecting it.
However, I then added one controller back and changed it to descend from one of the read-only controllers, overriding the ancestor constructor, etc. - it still gave a 500.
Base class:
namespace InfoFeed.WebAPI.Features.Account
{
/// <summary>
/// Handle user account related tasks
/// </summary>
[Authorize]
[Produces("application/json")]
[Route("api/account")]
public class AccountController : Controller
{
private readonly ILogger _log;
protected readonly IMediator _mediator;
public AccountController(ILogger<AccountController> log,
IMediator mediator)
{
_log = log;
_mediator = mediator;
}
Descendant class:
namespace InfoFeedAdmin.WebAPI.Features.Account
{
/// <summary>
/// Handle user account related tasks
/// </summary>
[Authorize]
[Produces("application/json")]
[Route("api/account")]
public class AccountAdminController
: InfoFeed.WebAPI.Features.Account.AccountController
{
public AccountAdminController(ILogger<AccountAdminController> log,
IMediator mediator)
: base(log, mediator)
{
}
I thought that perhaps the route might be causing a clash so I tried changing that to [Route("api/admin/account")] - this worked as long as there were no clashing method signatures. However, it means that there are two sets of routes exposed to the same underlying controller methods.
POST /api/account/signin
GET /api/account/signout
POST /api/admin/account/signin
GET /api/admin/account/signout
Does anyone know how I can hide (perhaps selectively) the routes from the ancestor class so that only the routes I choose to expose from the descendent class are visible/accessible?
Cheers
By default MVC will search the dependency tree and find controllers (even in other assemblies).
You can use application parts to avoid looking for controllers in a particular assembly or location.
If you have an assembly that contains controllers you don't want to be used, remove it from the ApplicationPartManager:
services.AddMvc()
    .ConfigureApplicationPartManager(apm =>
    {
        var dependentLibrary = apm.ApplicationParts
            .FirstOrDefault(part => part.Name == "DependentLibrary");
        if (dependentLibrary != null)
        {
            apm.ApplicationParts.Remove(dependentLibrary);
        }
    });
Source: https://learn.microsoft.com/en-us/aspnet/core/mvc/advanced/app-parts?view=aspnetcore-2.1
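Applied to the projects in the question, the same idea might look like this in the admin project's Startup.ConfigureServices (a sketch: the part name is the referenced assembly's simple name, assumed here to be "InfoFeed.WebAPI"):
services.AddMvc()
    .ConfigureApplicationPartManager(apm =>
    {
        // Drop the controllers discovered in the referenced read-only assembly;
        // the admin project's own (derived) controllers stay registered.
        var readOnlyPart = apm.ApplicationParts
            .FirstOrDefault(part => part.Name == "InfoFeed.WebAPI");
        if (readOnlyPart != null)
        {
            apm.ApplicationParts.Remove(readOnlyPart);
        }
    });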
I have a Winforms application that is designed to integrate with external software packages. This application reads data from these packages and pushes it to our server where users log in and use our application (App).
public abstract class ClassToImplement
{
public abstract void DefinedMethod1();
public abstract void DefinedMethod2();
}
When we designed the application it was intended to do 95% of the integration work with the remaining 5% (implementation class / App2) being developed by a consultant who's familiar with the 3rd party software.
public class Implemented : ClassToImplement
{
    public override void DefinedMethod1() { /* consultant's implementation */ }
    public override void DefinedMethod2() { /* consultant's implementation */ }
}
The "App" outputs a Class Library which is then referenced in the Implementation (App2). In our design we created an Abstract Class and defined the methods. The idea was that the consultant would download the repo for the implementation class and include the App as a reference. They would then write the necessary code for the methods they're implementing, compile and "voila!"
For obvious reasons I don't want to share the source project with external developers, otherwise I'd just share the full solution and use a single app, and, while I know they can see a lot with the DLL reference, it is just easier for us to control everything.
The problem comes with App: the main application code needs to instantiate the implementation class, and only then does the program run.
in Form1.cs of App:
ClassToImplement impObj = new Implemented();
impObj.DefinedMethod1();
impObj.DefinedMethod2();
The challenge I'm having is that I cannot build "App" to output a DLL without instantiating the Class. I cannot instantiate the Implemented Class as I haven't got the code (yet).
It would be great to know how to go about achieving this sort of abstraction with a dependency on (yet) unwritten code, and also, what is the technical term for what I'm trying to do?
To make it just "work", use a Func<ClassToImplement> which returns an instance of the abstract class.
In your secret repo:
//Your "App" DLL Project
public abstract class ClassToImplement
{
public abstract void DefinedMethod1();
public abstract void DefinedMethod2();
}
public class App : Form
{
public App(Func<ClassToImplement> initiator)
{
InitializeComponent();
ClassToImplement ci = initiator.Invoke();
ci.DefinedMethod1();
ci.DefinedMethod2();
}
}
//This is in a separate project which will be your startup project internally
public class Dummy : ClassToImplement
{
public override void DefinedMethod1(){}
public override void DefinedMethod2(){}
}
public class Program
{
public static void Main()
{
Application.Run(new App(()=> new Dummy()));
}
}
In the repo shared with the consultant:
// In the repo which is shared with the consultant
// This will be the startup project on the build server, and when the consultant is testing.
public class Implementation : ClassToImplement
{
public override void DefinedMethod1(){}
public override void DefinedMethod2(){}
}
public class Program
{
public static void Main()
{
Application.Run(new App(()=> new Implementation()));
}
}
On your build server, you can pull from both the repos, and set the startup project as the one given to the consultant. But when you are testing and developing internally, you set the startup project to your version with an implementation that does nothing.
As a side note, if you think what you are doing needs to be protected from consultants who have signed a confidentiality agreement, make sure to obfuscate when you do a release.
This is a two-step process usually:
Locate and load the assembly/dll:
Assembly assembly = Assembly.LoadFrom(DLL);
Instantiate the implemented class:
Type type = assembly.GetType(FullNameOfImplemented);
AppInstance = (ClassToImplement)Activator.CreateInstance(type, parameters);
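Pulled together into a minimal, self-contained sketch (the file name, full type name, and parameterless constructor below are assumptions for illustration):
using System;
using System.Reflection;

public static class ImplementationLoader
{
    public static ClassToImplement Load()
    {
        // Load the consultant's assembly from disk at runtime, so App never
        // needs a compile-time reference to the implementation project.
        Assembly assembly = Assembly.LoadFrom("App.Implementation.dll");

        // Resolve the concrete type by its full name and instantiate it.
        Type type = assembly.GetType("App.Implementation.Implemented");
        return (ClassToImplement)Activator.CreateInstance(type);
    }
}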
The process you are looking for is often called stubbing. In this case you've chosen to encapsulate the integration functionality in a library, not web services, but the principle is the same.
The idea was that the consultant would download the repo for the implementation class and include the App as a reference.
This sounds like you've got the dependency relationship the wrong way round. If the consultant's code references your app, then your app can't reference it - it'd be a circular dependency. Instead, factor your app into something more in line with the following:
          App
           |
           |
App.Integration.Contracts
     ^               ^
     |               |
App.Integration    App.Integration.Stub
The abstract class - it could just as easily be an interface in C# - resides in the Contracts assembly. This is the only compiled dependency your application has. Then at runtime use configuration to load either the stub, or the full implementation using an IoC container. An example is Unity for which you will need its configuration API. Reference the true type to use in the configuration file and change only that to update your application to use the full functionality.
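Unity's configuration API aside, a minimal hand-rolled sketch of the same principle might look like this (the appSettings key, type names, and assembly names are assumptions for illustration): the concrete type name comes from configuration, so switching between the stub and the real integration is a config change rather than a rebuild.
using System;
using System.Configuration;

public static class IntegrationFactory
{
    public static ClassToImplement Create()
    {
        // e.g. <add key="IntegrationType"
        //          value="App.Integration.Implemented, App.Integration" />
        // or point it at the stub:
        //          value="App.Integration.Stub.StubIntegration, App.Integration.Stub"
        string typeName = ConfigurationManager.AppSettings["IntegrationType"];

        Type type = Type.GetType(typeName, throwOnError: true);
        return (ClassToImplement)Activator.CreateInstance(type);
    }
}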
First off, I think you need to implement a proper plugin system if you don't want to share your code with those other developers.
Second, you should code against your interface and not against its implementation: first because you don't have the implementation yet, and second because you may want to switch implementations for different third-party software.
If you need an instance for testing, you can use a handwritten mock or a mocking framework. If you need a real instance later on (when the other developers have delivered), you can use a design pattern such as the factory pattern for the creation - see the sketch below. Try to avoid the new keyword if you want to be able to change implementations later on.
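A minimal factory sketch, assuming a hypothetical registration step performed once by whichever project knows the concrete type:
using System;

// Callers ask the factory for an instance instead of using new directly,
// so the concrete type can be swapped without touching the calling code.
public static class ImplementationFactory
{
    private static Func<ClassToImplement> _create;

    // Called once at startup (by the dummy/stub project internally, or by the
    // consultant's project once it exists).
    public static void Register(Func<ClassToImplement> create)
    {
        _create = create;
    }

    public static ClassToImplement Create()
    {
        if (_create == null)
            throw new InvalidOperationException("No implementation registered.");
        return _create();
    }
}

// Usage (hypothetical):
// ImplementationFactory.Register(() => new Implemented());
// ClassToImplement impObj = ImplementationFactory.Create();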
I have been looking at creating a common logging library for the company I work for, based on a blog by Daniel Cazzulino, so we can switch one logging framework out for another without too much disruption.
The first library I looked to use is log4net, but I cannot work out how or where you would set up a call to the XmlConfigurator.
I have tried adding the assembly-level attribute to the project being logged, which I feel rather defeats the object of the exercise, but that doesn't appear to work anyway.
I have also tried adding it as an assembly-level attribute of the log4net logging library, but that doesn't appear to work either.
I have also tried calling log4net.Config.XmlConfigurator.Configure(); from the TracerManager.Get method, but all the log options (IsDebugEnabled, IsWarnEnabled, ...) are disabled.
public partial class TracerManager : ITracerManager
{
/// <summary>
/// Gets a tracer instance with the specified name.
/// </summary>
public ITracer Get(string name)
{
log4net.Config.XmlConfigurator.Configure();
var logger = LogManager.GetLogger(name);
return new Log4NetAdapter(logger);
}
/// The rest
}
Do I need to do something else?
Does the app config need to be in the logging library?
[Edit 1]
Feel very silly....
I'd added [assembly: XmlConfigurator(Watch = true)] to my Logging.Log4Net library, but I wasn't instantiating the TracerManager in my application in the tests I was performing... ID-10Tango issue
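For reference, the two pieces that made it work: the assembly-level attribute in the Logging.Log4Net library, plus actually using the TracerManager in the consuming application. A minimal sketch (the logger name is illustrative, and the consuming application's app.config/web.config still needs its log4net section declared):
// In the Logging.Log4Net library (e.g. AssemblyInfo.cs): tell log4net to read
// its configuration from the application's config file and watch for changes.
[assembly: log4net.Config.XmlConfigurator(Watch = true)]

// In the consuming application: the TracerManager has to actually be created
// and used, otherwise nothing ever triggers log4net's configuration.
ITracerManager tracerManager = new TracerManager();
ITracer tracer = tracerManager.Get("MyApp.SomeComponent");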
OK, now I'm really confused.
I originally had this problem, which is, according to posters, an issue with the version of Castle.DynamicProxy that's ILMerged into the latest Rhino.Mocks library. It has, according to several authorities on the subject, been fixed in the latest Castle, but that library has not made it into a new Rhino.Mocks. Most people are saying "just download the Rhino source and the latest Castle and build your own version".
So, I did exactly that; I grabbed a ZIP of the Rhino trunk source from Ayende's GitHub, opened it up, and built it. Then, like a good little TDDer, I created a unit test to make sure my changes worked (because the latest Castle folds DynamicProxy into Core, requiring some significant referencing changes):
[Test]
public void MockOfInterfaceMethodWithInterfaceGTR()
{
var mock = mocks.DynamicMock<ITestRestrictedInterface>();
Assert.NotNull(mock);
Expect.Call(mock.TestMethod(new Object2())).IgnoreArguments().Return(5);
mocks.ReplayAll();
Assert.AreEqual(5, mock.TestMethod(new Object2()));
}
...
internal interface ITestGenericInterface<TRest> where TRest:IObject1
{
int TestMethod<T>(T input) where T : TRest;
}
internal interface ITestRestrictedInterface:ITestGenericInterface<IObject2> { }
internal interface IObject1 { }
internal interface IObject2:IObject1 { }
internal class Object2:IObject2 { }
The result, when run in my own production code with the latest released Rhino? Failure with the following message:
System.TypeLoadException : Method 'TestMethod' on type
'ITestRestrictedInterfaceProxy83ad369cdf41472c857f61561d434436' from
assembly 'DynamicProxyGenAssembly2, Version=0.0.0.0, Culture=neutral,
PublicKeyToken=null' tried to implicitly implement an interface method
with weaker type parameter constraints.
...However, when I copy and paste this test into a fixture in the Rhino.Mocks.Tests project, without making any changes to referenced libraries, the test PASSES. I have made zero changes to the downloaded source. I have made ZERO changes to the test method and related interfaces/objects on both sides. I built a new Rhino.Mocks DLL (without IL-merging the Castle libs) and copied it with Castle libs back to my production solution, re-ran the test, and it still fails with the same message.
WTF?
I'm not a Castle expert nor a compiler guru, but I believe the issue is a little bit of magic hidden inside the Rhino.Mocks.Tests assembly:
From https://github.com/ayende/rhino-mocks/blob/master/Rhino.Mocks.Tests/TestInfo.cs
using System.Runtime.CompilerServices;
using Rhino.Mocks;
[assembly: InternalsVisibleTo(RhinoMocks.StrongName)]
And for completeness' sake, RhinoMocks.StrongName is defined as:
/// <summary>
/// Used for [assembly: InternalsVisibleTo(RhinoMocks.StrongName)]
/// Used for [assembly: InternalsVisibleTo(RhinoMocks.NormalName)]
/// </summary>
public static class RhinoMocks
{
/// <summary>
/// Strong name for the Dynamic Proxy assemblies. Used for InternalsVisibleTo specification.
/// </summary>
public const string StrongName =
"DynamicProxyGenAssembly2, PublicKey=0024000004800000940000000602000000240000525341310004000001000100c547cac37abd99c8db225ef2f6c8a3602f3b3606cc9891605d02baa56104f4cfc0734aa39b93bf7852f7d9266654753cc297e7d2edfe0bac1cdcf9f717241550e0a7b191195b7667bb4f64bcb8e2121380fd1d9d46ad2d92d2d15605093924cceaf74c4861eff62abf69b9291ed0a340e113be11e6a7d3113e92484cf7045cc7";
/// <summary>
/// Normal name for dynamic proxy assemblies. Used for InternalsVisibleTo specification.
/// </summary>
public const string NormalName = "DynamicProxyGenAssembly2";
/// <summary>
/// Logs all method calls for methods
/// </summary>
public static IExpectationLogger Logger = new NullLogger();
}
I've seen a similar issue when using Moq, which has this issue documented.
The problem is that Castle's DynamicProxy needs to dynamically derive a new type but does not have visibility of your interface, which is internal to your assembly. Simply adding an InternalsVisibleTo attribute for DynamicProxyGenAssembly2 to your test library should solve the problem.
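Concretely, that means something like the following in the test project (reusing the RhinoMocks.StrongName constant shown above so the long public key doesn't have to be repeated):
// In the test project's AssemblyInfo.cs (or any file compiled into it):
using System.Runtime.CompilerServices;
using Rhino.Mocks;

// Lets Castle's dynamically generated proxy assembly see the internal
// interfaces (ITestRestrictedInterface etc.) it has to implement.
[assembly: InternalsVisibleTo(RhinoMocks.StrongName)]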
We are running a webforms project at my company and I have an HttpModule that I need to resolve dependencies for.
We use the Ninject.Web library to resolve dependencies for master pages, pages, user controls, web services, and HttpHandlers. All these have base classes you can inherit from in the Ninject.Web Namespace:
MasterPageBase
PageBase
WebServiceBase
HttpHandlerBase
And a custom one we added since for some odd reason it wasn't there: UserControlBase
However I am unable to find a HttpModuleBase. There is a NinjectHttpModule, but that is not a base class, it is a real module that tries to eliminate the need to inherit from base classes in pages and user controls, but it has some bugs and we are not using it.
What is the best way to resolve my dependencies in my HttpModule?
When I google this I come up with this question on the first page -_-
Phil Haack blogged about a way to do this that makes it possible to use constructor injection and thereby avoid making your HttpModule depend directly on Ninject. In a standard NinjectHttpApplication, do the following:
Step 1
Use Nuget to find and add the HttpModuleMagic package to your web project.
Step 2
Write your HttpModule to use constructor injection:
public class MyHttpModule : IHttpModule
{
public MyHttpModule(ISomeService someService) {...}
}
Step 3
Remove the http module from your web.config:
<httpModules>
<!-- Modules will be defined via DI bindings -->
</httpModules>
Step 4
Set up bindings:
Bind<IHttpModule>().To<MyHttpModule>();
// Repeat the pattern above for any other modules.
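For context, those bindings would typically live in the application's kernel setup (for example the CreateKernel override of a NinjectHttpApplication-derived Global.asax class); ISomeService and SomeService below are placeholder names:
// Inside the application's kernel setup, e.g. CreateKernel():
IKernel kernel = new StandardKernel();

// The dependency the module asks for via its constructor...
kernel.Bind<ISomeService>().To<SomeService>();

// ...and the module itself, so it can be resolved and registered via DI
// as described in Step 3 above.
kernel.Bind<IHttpModule>().To<MyHttpModule>();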
I'm kind of amazed that nobody has answered this all day! Looks like I stumped you guys :)
Well, I solved the issue. I wrote my own custom implementation of IHttpModule and compiled it into the Ninject.Web assembly myself. Here is the source of the base class I added:
namespace Ninject.Web
{
public class HttpModuleBase : IHttpModule
{
/// <summary>
/// This method is unused by the base class.
/// </summary>
public virtual void Dispose()
{
}
/// <summary>
/// Initialize the module and request injection.
/// </summary>
/// <param name="context">The current HttpApplication instance.</param>
public virtual void Init(HttpApplication context)
{
RequestActivation();
}
/// <summary>
/// Asks the kernel to inject this instance.
/// </summary>
protected virtual void RequestActivation()
{
KernelContainer.Inject(this);
}
}
}
I simply modeled it after the other base classes in the Ninject.Web assembly. It appears to be working wonderfully. Just make your HttpModule inherit from Ninject.Web.HttpModuleBase and then you are free to use property injection within your module, like this:
public class AppOfflineHttpModule : HttpModuleBase
{
[Inject]
public IUtilitiesController utilitiesController { get; set; }
...
}
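One caveat worth noting: property injection via KernelContainer.Inject only works if the kernel has a binding for each [Inject] property, so something like the following (the concrete UtilitiesController type is a placeholder) needs to exist wherever the kernel is configured:
// In the kernel setup (e.g. where KernelContainer.Kernel is assigned):
kernel.Bind<IUtilitiesController>().To<UtilitiesController>();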