Dependency injection using compile-time weaving? [closed] - c#

I just tried to learn about PostSharp and honestly I think it's amazing.
But one thing I find difficult is that pure dependency injection (not service location) can't be done in PostSharp aspects, presumably as a result of compile-time weaving.
Coming from a PHP background: Symfony has JMSAopBundle, which still allows dependencies to be injected into its interceptors.
Does .NET have any libraries with the same capability?
Or am I missing something with PostSharp?

I don't think you're missing anything here; the limitation is indeed the result of using compile-time weaving.
Although I think compile-time weaving tools have their place in software development, I feel they are often overused. Often I see them being used to patch flaws in the application design. In the applications I build I apply generic interfaces to certain architectural concepts. For instance, I define:
an ICommandHandler<TCommand> interface for services that implement a certain use case;
an IQueryHandler<TQuery, TResult> interface for services that execute a query;
an IRepository<TEntity> interface as abstraction over repositories;
an IValidator<TCommand> interface for components that execute message validation;
and so on, and so on.
This allows me to create a single generic decorator for each such group of artifacts (for instance a TransactionCommandHandlerDecorator<TCommand> that allows running each use case in its own transaction; see the sketch after this list). The use of decorators has many advantages, such as:
Those generic decorators are completely tool agnostic, since there is no reference to a code weaving tool or interception library. PostSharp aspects are completely dependent on PostSharp, and interceptors always take a dependency on an interception framework, such as Castle.DynamicProxy.
Because a decorator is just a normal component, dependencies can be injected into the constructor and they can play a normal role when you compose your object graphs using Dependency Injection.
The decorator code is very clean, since there is no dependency on any third-party tool.
Because they're tool agnostic and allow dependency injection, decorators can be unit tested easily without having to revert to special tricks.
Application code that needs cross-cutting concerns to be applied can likewise be tested easily in isolation, because decorators are not weaved in at compile time. When decorators are weaved in at compile time, you're always forced to do an integration style of testing of your application code, or need to resort to special build tricks to prevent them from being applied in your unit test project.
Decorators can be applied dynamically and conditionally at runtime, since there's no compile time code weaving going on.
Performance is identical (or even faster) than with code weaving, because there's no reflection going on during object construction.
There's no need to mark your components with attributes to note that some aspect must be applied. This keeps your application code free of any knowledge of such cross-cutting concern and makes it much easier to replace this.
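To make this concrete, here is a minimal sketch of such a generic decorator, assuming the ICommandHandler<TCommand> abstraction described above (the transaction handling shown is one plausible implementation, not the only one):

using System.Transactions;

public interface ICommandHandler<TCommand>
{
    void Handle(TCommand command);
}

public class TransactionCommandHandlerDecorator<TCommand> : ICommandHandler<TCommand>
{
    private readonly ICommandHandler<TCommand> decoratee;

    public TransactionCommandHandlerDecorator(ICommandHandler<TCommand> decoratee)
    {
        this.decoratee = decoratee;
    }

    public void Handle(TCommand command)
    {
        // Run the wrapped use case inside its own transaction.
        using (var scope = new TransactionScope())
        {
            this.decoratee.Handle(command);
            scope.Complete();
        }
    }
}

Because the decorator implements the same interface it wraps, the composition root (or a DI container) can wrap any command handler with it without that handler knowing.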
A lot has been written about this kind of application design; here are a few articles I wrote myself:
Meanwhile... on the command side of my architecture
Meanwhile... on the query side of my architecture
Writing Highly Maintainable WCF Services
Chapter 10, Aspect-Oriented Programming by Design, of my book Dependency Injection Principles, Practices, Patterns contains a very detailed discussion on this type of design.
UPDATE
"Decorators are great, but what I like about AOP is its concept of advice and join points. Is there a way to simulate the same capability with a decorator? I could only think of reflection right now."
A Join Point is a "well defined location within a class where a concern is going to be attached". When you apply AOP using decorators, you will be 'limited' to join points that are on the method boundaries. If however you adhere to the SRP, OCP and ISP, you will have very thin interfaces (usually with a single method). When doing that, you will notice that there is hardly ever a reason for having a join point at any other place in your classes.
An Advice is a "concern which will potentially change the input and/or output of the targeted method". When working with decorators and a message-based design (the thing I'm promoting here), your Advice needs to change the message (or replace the complete message with altered values) or change the output value. Things aren't that much different from code weaving: if you apply an Advice, there must be something in common between all the code the Advice is applied to.
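As an illustration of an Advice applied at a method-boundary join point, here is a minimal sketch of a validation decorator, reusing the ICommandHandler<TCommand> abstraction from the sketch above; the IValidator<TCommand> shape and its Validate method are assumptions for illustration:

public interface IValidator<TCommand>
{
    void Validate(TCommand command);
}

public class ValidationCommandHandlerDecorator<TCommand> : ICommandHandler<TCommand>
{
    private readonly IValidator<TCommand> validator;
    private readonly ICommandHandler<TCommand> decoratee;

    public ValidationCommandHandlerDecorator(
        IValidator<TCommand> validator, ICommandHandler<TCommand> decoratee)
    {
        this.validator = validator;
        this.decoratee = decoratee;
    }

    public void Handle(TCommand command)
    {
        // The 'advice': inspect (and possibly reject or transform) the
        // incoming message before the call crosses the join point.
        this.validator.Validate(command);
        this.decoratee.Handle(command);
    }
}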

Related

Is it good practice to use NSubstitute (or other testing frameworks that allow mocking) in the code (not in the tests)? [closed]

I came across usage of NSubstitute inside business logic (outside of test classes):
var extension = Substitute.For<IExtension>();
I'm used to using NSubstitute inside test classes, when you need to mock some class (or interface). But using NSubstitute outside of test classes confused me. Is that a correct place for it? Is it correct to use NSubstitute like a dependency injection container that can create an instance of an interface/class?
My concern is that NSubstitute was designed to be used for tests. Performance inside tests is not terribly important, so it is allowed to be slow. Also, it relies on reflection, so it may not be very quick. But is the performance of NSubstitute actually poor, or is it OK?
Are there any other reasons why NSubstitute or other mocking libraries should not be used outside of tests?
No, it is not generally good practice to use a mocking library in production code. (I'm going to use "generally" a lot here as I think any question on "good practice" will require a degree of generalisation. People may be able to come up with cases that work against this generalisation, but I think those cases will be the vast minority.)
Even without performance considerations, mocking libraries create test implementations for interfaces/classes. Test implementations generally support functions such as recording calls made and stubbing specific calls to return specific results. Generally when we have an interface or class for production code, it is to achieve some specific purpose, not the general purpose of recording calls and stubbing return values.
While it would be possible to provide a specific implementation for an interface using NSubstitute and to stub each call to execute production-code logic, why not just create a class with the required implementations (see the sketch after this list)?
This will generally have these advantages:
should be more succinct to implement (if not, consider switching to a better language! :D)
uses the native constructs of your programming language
should have better performance (removes levels of indirection required for mocking library)
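For illustration, here is a minimal sketch of that alternative; the members of IExtension are hypothetical, since the question doesn't show the interface:

public interface IExtension
{
    int Priority { get; }
}

// Test-library approach (what the question describes):
//   var extension = Substitute.For<IExtension>();
//   extension.Priority.Returns(5);

// Plain-language equivalent, no mocking library involved:
public class FixedPriorityExtension : IExtension
{
    public FixedPriorityExtension(int priority)
    {
        Priority = priority;
    }

    public int Priority { get; }
}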
For NSubstitute specifically there are some big reasons why you should never use it in production code. Because the library is designed for test code it uses some approaches that are unacceptable for production code:
It uses global state to support its syntax.
It abuses C#/VB syntax for the purpose of testing (can almost be considered a testing DSL). e.g. say sub.MyCall() returns an int. Stubbing a call like sub.MyCall().Returns(42) means we are calling int.Returns(42), which is now somehow going to influence a return value of a call outside of the int on which it is being called. This is quite different to how C#/VB generally works.
It requires virtual members for everything. This constraint is shared by many mocking libraries. For NSubstitute, you can get unpredictable results if you use it with non-virtual members. Unpredictability is not a nice thing to have for production code.
Tests are generally short-lived. NSubstitute (and probably other libraries) can make implementation decisions that rely on short-lived objects.
Tests show pass or failure for a particular case. This means if there is a problem in NSubstitute it can be immediately picked up while attempting to write and run a specific test. While a lot of effort goes into NSubstitute quality and reliability, the amount of work and scrutiny the C# compiler goes through is a completely different level. For production code, we want a very stable base to use as a foundation.
In summary: your programming language provides constructs designed and optimised for implementing logical interfaces. Your mocking library provides constructs designed and optimised for the much more limited task of providing test implementations of logical interfaces for use with testing code in isolation from its dependencies. Unless you have an ironclad reason as to why you would do the programming equivalent of digging a hole with a piece of paper instead of a shovel, I'd suggest using each tool for its intended purpose. :)

Injection or creating instance with new() [closed]

I'm a C# programmer and I'm thinking about dependency injection. I've read the "Dependency Injection in .NET" book, and the patterns and anti-patterns of DI are quite clear to me. I use constructor injection most of the time. Are there any cases in which it is preferable to create instances directly rather than using a Dependency Injection framework?
Using Dependency Injection has the advantage of making code testable; however, abusing the DI pattern makes code harder to understand. Take for example this framework (ouzel; I'm not affiliated in any way, I just liked the way it was designed), which I recently started to follow. As you can see, most classes have their dependencies injected; however, there is still a single shared instance that is used without constructor injection, sharedEngine.
In that particular case I find the author made a good choice, one that makes the code overall simpler to understand (simpler constructors, fewer members) and potentially more performant (you don't have a shared pointer stored in every instance of every class of the engine).
Still, its code can be tested, because you can replace that (global) instance with a mock (the worst aspect of globals is that their initialization order and dependencies are hard to track; however, if you limit yourself to a few globals with no or few dependencies, this is not a problem). As you can see, you are not always forced to inject everything through the constructor (and I wrote a DI framework for C++).
The problem is that people think it is always good to inject everything through the constructor, so you suddenly start seeing frameworks that allow injecting everything (like an int or a std::vector<float>), while in reality that's a terrible idea (in fact, in my simple framework I only allow injecting classes): the code becomes harder to understand because you are mixing configuration values with wiring logic, and you have to travel through more files to get a grasp of what the code is doing.
So, constructor injection is very good; use it where appropriate, but it is not a jack-of-all-trades. Like everything in programming, you have to avoid abusing it. Best of all, try to understand good examples of every programming practice/pattern and then roll your own recipe; programming is made of choices, and every choice has good and bad sides.
When is it OK (and by "OK" I mean you will still be able to test the code, as if it were not coupled to concrete instances) to call "new"? A sketch of a couple of these cases follows the list.
You need polymorphism; most times it is easier to create the new class directly than to configure it via a DI framework
You need an object factory; usually the factory itself is injected, but the factory code calls "new" explicitly
You are calling "new" in main (the composition root)
The object you are creating with "new" has no dependencies, and thus using it inside a class does not make the class harder to test (for example, you create standard .NET containers with "new"; doing otherwise results in much more confusion)
The object you are creating is a global instance which does not rely on initialization order and whose dependencies are not visible elsewhere (you can mock the instance as long as you access it through an interface)
The above list gives situations in which, even when using a DI framework (like Ninject), it is OK to call "new" without losing the ability to test your code. If anything, when you use DI in the above cases you usually end up with more complex code.
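A minimal sketch of the factory and composition-root cases from the list above, with purely illustrative names:

public class Widget
{
    public Widget(string name)
    {
        Name = name;
    }

    public string Name { get; }
}

public interface IWidgetFactory
{
    Widget Create(string name);
}

public class WidgetFactory : IWidgetFactory
{
    // The factory itself is injected where needed,
    // but its implementation calls "new" explicitly.
    public Widget Create(string name)
    {
        return new Widget(name);
    }
}

public static class Program
{
    public static void Main()
    {
        // Calling "new" in main (the composition root) is fine:
        // this is where the object graph gets wired up anyway.
        IWidgetFactory factory = new WidgetFactory();
        Widget widget = factory.Create("example");
    }
}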

Is the lack of "objects" in Thrift awkward? [closed]

Note: I know the question title is suboptimal. Feel free to improve.
Thrift enables serialization as well as RPC. However, unlike systems like COM or CORBA or ZeroC ICE, ... it does not have the notion of a remote object or remote interface in a polymorphic way; therefore, all services defined in a Thrift infrastructure are just collections of functions.
Thrift Features
Thrift's list of Non-Features states (interface?) polymorphism as a non-goal, which is fair enough, but ...
As a programmer in languages that make natural use of objects -- where I can have functions that return other objects (or interface references), not just structs -- this appears to be a bit awkward: all "object" functionality in a Thrift service would have to be provided by functions that additionally take handles as input parameters to define what is being operated on -- a bit like doing OO in C :-)
Imagine a Thrift service operating on files. Its interface would look much more like what C has (fopen etc.) than what we use today in C++, C# or possibly even Python.
Of course one could write additional wrappers in the target language, but you don't have any support from the Thrift framework, so that's what I'd call "awkward".
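To make the shape of the problem concrete, here is a rough C# sketch; these interfaces are hypothetical, not generated Thrift code:

// The procedural, handle-based surface a Thrift file service ends up with:
public interface IFileService
{
    long Open(string path);              // returns an opaque handle
    byte[] Read(long handle, int count); // every call re-identifies its target
    void Close(long handle);
}

// The object-oriented surface that Thrift's model does not provide:
public interface IRemoteFile
{
    byte[] Read(int count);
    void Close();
}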
Phrasing it another way: Is dropping back to a purely procedural interface on the remote service level an issue?
To give this yet another twist: even when I use the REST interface of, say, Jenkins, the URL-based interface I have feels slightly "OO", in that I access job objects by URL name and then specify the operations on them via GET parameters. That is, to me it seems a string-based REST approach can capture operations on resources (objects or interfaces, if you like) much more naturally than a purely procedural interface. It is totally OK for Thrift to define that as out of scope, but it would be good to know whether users find it a noticeable thing.
This is a question to active Thrift users: Is the problem I describe above an actual problem in day to day use? Is it an observed problem at all?
Is this a general "problem" with SOA?
My impression is that you are mixing concepts in an incorrect way and then trying to draw conclusions from that.
RPC is nothing more than a remote procedure call. This means exactly that: Calling a remote piece of code, passing some arguments and getting some results. That's all. How to interpret these data is an entirely different thing.
In an OOP context, every method call (including RPC, but not limited to) is a procedure/function call with an additional hidden argument typically called this or Self. What really distinguishes an object from non-OOP code is the ability to do information hiding, derive classes and override methods, and some other nice stuff. Behind the scenes everything is just data, which becomes painfully obvious when you have to de/serialize your objects into e.g. a database - in most of the cases you will use an ORM of some kind for that task. An RPC mechanism is on an equivalent plane. What frameworks like COM or CORBA do behind the scenes is nothing else, they just hide it better from you.
At least with COM, you are not dealing with objects. You are interacting with interfaces, which are typically implemented as objects. It is hard to tell whether or not a particular interface is part of the object, or if it is added by aggregation or composition. Even the opposite can be true: It may be the case, that two otherwise unrelated interfaces may be implemented by the very same object instance for some reason. Interfaces have more in common with services than they have with objects.
SOA is not limited to RPC. For example, a REST-based interface is not considered RPC by a lot of people (although one can argue that point) and does not offer any objects that would deserve the name, yet you can do SOA with REST. And of course, SOA is not limited to COM or CORBA environments, nor to SOAP or XML-RPC interfaces. SOA is primarily about services (hence the name), not objects. To put it in one sentence: RPC, OOP and SOA are different concepts, and comparing them to each other is what is called a category mistake.
How the server and client code represent your data depends on the system used and the traits of the target language. Don't let yourself be confused by the naming of the IDL entity - a struct in the IDL is not necessarily a struct in code. For example, using Thrift and C# as the target language, you get neat partial classes generated from a struct, easily extendable with some manually written code (see the sketch below). This may be different with another target language (say plain C or the Go language) and another system like Protobuf, Avro or the REST client of your choice.
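A minimal sketch of that extension pattern; the UserProfile struct and its fields are made up for illustration:

// Generated half (normally produced by the Thrift compiler from the IDL):
public partial class UserProfile
{
    public string FirstName { get; set; }
    public string LastName { get; set; }
}

// Hand-written half: add convenience members without touching
// the generated file.
public partial class UserProfile
{
    public string DisplayName
    {
        get { return FirstName + " " + LastName; }
    }
}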

Building out a 3rd Party API/SDK [closed]

Overview
Over the last 3 years we've built a full-featured software package in C#.
Our software was architected in such a way that it handles a lot of the low-level plumbing required for the application, so that our developers can focus on the specific problem they are trying to solve rather than all the minutiae. This has improved development and release times significantly.
As such, the code is broken out into various projects to give us logical separation (e.g. a front-end MVC app, a services layer, a core framework layer, etc)
Our core framework project has a lot of functionality built into it (the main 'guts' of the application) and it has been carefully organized into various namespaces that would be familiar to all (e.g. Data Access, IO, Logging, Mail, etc)
As we initially built this, the intent was always for our team to be the target audience, our developers coding the various new pieces of functionality and adding to the framework as needed.
The Challenge
Now the boss wants to open our codebase up to 3rd-party developers and teams outside of our own company. These 3rd-party folks need to be able to tap directly into our core libraries and build their own modules that will be deployed along with ours on our servers. Due to the nature of the application, it is not something we could solve by exposing functionality to them via REST or SOAP or anything like that; they need to work in an environment much like our own, where they can develop against our core library and compile their own DLLs for releases.
This raises many concerns and challenges with regard to intellectual property (we have to be able to protect the inner workings of our code), distribution, deployment, versioning, testing and releases, and, perhaps most important, how we will shape the framework to best meet these needs.
What advice would you offer? How would you approach this? What kind of things would you look to change, or what kind of design approach would you look to move towards? I realize these questions are very open-ended and perhaps even vague, but I'm mainly looking for any advice, resources/tutorials, or stories from folks who may have faced a similar challenge. Thanks!
I'm not sure the MEF answer really solves your problem. Even using Interfaces and MEF to separate the implementation from the contracts, you'll still need to deliver the implementation (as I understand your question), and therefore, MEF won't keep you from having to deliver the assemblies with the IP.
The bottom line is that if you need to distribute your implementation assemblies, these 3rd parties will have your IP, and have the ability to decompile them. There's no way around that problem with .NET, last I checked. You can use obfuscation to make it more difficult, but this won't stop someone from decompiling your implementation, just make it harder to read and understand.
As you've indicated, the best approach would be to put the implementation behind a SaaS-type boundary, but it sounds like that's out of the question.
What I will add is that I highly recommend developing a robust versioning model. This will impact how you define your interfaces/APIs, how you change them over time, and how you version your assemblies. If you are not careful, and you don't use a combination of both AssemblyVersion and AssemblyFileVersion for your assemblies (see the sketch below), you'll force unnecessary recompiles from your API clients, and this can be a massive headache (even some of the big control vendors don't handle this right, sadly). Read up on these, as they are very important for API/component vendors in my opinion.
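A minimal sketch of the two attributes (typically placed in AssemblyInfo.cs); the version numbers are made up:

using System.Reflection;

// Binding identity: change this only on breaking releases; otherwise every
// strongly-named reference forces clients to recompile or rebind.
[assembly: AssemblyVersion("2.1.0.0")]

// Informational file version: safe to bump on every build.
[assembly: AssemblyFileVersion("2.1.7.1342")]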
NDAs and/or license agreements are another way, as @trailmax indicates, if you feel your users will respect such agreements (individuals vs. companies may view these types of agreements differently).
Oh, also make sure that you sign your assemblies with a strong name. And to do this, you'll probably need to establish a strategy to protect your signing keys. This seems simple at first, but securing your signing keys adequately is not as easy as it appears at first blush. You often have to have multiple sets of keys for different environments, need to incorporate the keys into CI/CD systems, and need to ensure access to the release keys is tightly held.
As @HighCore already said, implement interfaces for all the stuff you want to expose. Put them into a separate project/repository and give read-only access to that project/repository. But your interfaces must be properly documented, otherwise it might be painful for the other guys.
This way your code is not really visible to them, and they can still work on it.
If that does not work out, and you are forced to show them your code, get them to sign an NDA. The NDA should state that your code is yours and they can't redistribute it in any way.
I guess my answer is as vague as the question, but gives you some ideas.

What are the real-world pros and cons of each of the major mocking frameworks? [closed]

see also "What should I consider when
choosing a mocking framework for
.Net"
I'm trying to decide on a mocking framework to use on a .NET project I've recently embarked on, and I'd like to speed up my research on the different frameworks. I've recently read this blog post http://codevanced.net/post/Mocking-frameworks-comparison.aspx and wondered whether anyone in the StackOverflow audience has anything to add in the way of real-world advantages and caveats to the frameworks.
Could people list the pros/cons of the mocking frameworks they either currently use or have investigated for their own use on .NET projects? I think this would not only help me decide for my current project, but also help others make more informed decisions when picking the right framework for their situation. I'm not an expert on any of the frameworks, but I would like to get arguments for and against the major frameworks I've come across:
RhinoMocks
Moq
TypeMock Isolator
NMock
Moles
And other usable alternatives that I've missed. I'd also like insights from users that have switched or stopped using products because of issues.
I don't know Moles at all, but I'll cover the ones I know a bit about (I really need a table for this, though).
Moq
Pros
Type-safe
Consistent interface
Encourages good design
Cons
Not as full-featured as some of its competitors
It can't mock delegates
It can't do ordered expectations
probably other things I can't think of right now...
Can only mock interfaces and virtual/abstract members
Rhino Mocks
Pros
Type-safe
Full feature set
Encourages good design
Cons
Confusing API. There are too many different ways to do the same things, and if you combine them in the wrong way it just doesn't work.
Can only mock interfaces and virtual/abstract members
TypeMock Isolator
Pros
Type-safe (AFAIR)
Can mock anything
Cons
Very invasive
Potential Vendor Lock-In
Does not encourage good design
NMock
Pros
Encourages good design
Works on any version of .NET (even 1.1)
Cons
Not type-safe
Can only mock interfaces and virtual/abstract members
Please note that particularly the advantages and disadvantages regarding TypeMock are highly controversial. I published my own take on the matter on my blog.
I started out with NMock when that was the only option back in 2003, then migrated to Rhino Mocks because of its type safety, and now use Moq because of the simpler API.
So far I have used RhinoMocks and Moq. Moq is currently my favourite due to its simplicity, which is all I need right now. RhinoMocks is pretty powerful, but I have never been in a position to fully tap into it.
We've used Rhino Mocks for more than a year now.
PRO:
easy to create the mocks
can mock public and internal methods
can mock interfaces or classes
can create partial mocks (mocking only specific methods from a class)
AGAINST:
methods must be at least internal and virtual (this can mess with your architecture)
difficult to use for property-by-property asserts, especially for collections of objects that get created inside the test scope - the constraints syntax gets complicated
you have to be careful about when the recording stops and the playback begins
be careful about which calls are being mocked (like a property call that you didn't notice, or a method that wasn't virtual) - the errors you get are not very helpful
As a general note, we've found that using mocking frameworks promotes "white box" testing (especially for unit tests). We ended up with tests that validated HOW things were done, not WHAT they were doing. They were useless for refactoring and we had to rewrite most of them.
Like Frank and Chris, I tried RhinoMocks and switched to Moq. I haven't been disappointed. See my series of blog posts:
Stubbing problems with Rhino Mocks
Mocks: The Next Generation
Mocks: The Next Generation II
Switching to Moq
EDIT: Note that I generally do state-based testing with stubs; I seldom do behavior testing with verifiable mocks.
I've not used all those frameworks, but I looked at RhinoMocks and Moq, and went with Moq because it feels more elegant and much simpler. I am using the trunk version which includes a must-have fix for the 4 argument limit imposed on callbacks in the most recent 4.0 beta release.
I especially like the default Moq behavior, which doesn't behave like a strict mock object that fails tests when unexpected calls are made. You can configure it to do this if you want, but I find that it requires me to spend way too much time setting up expectations and not enough time testing (see the sketch below).
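A small sketch of the difference, assuming a hypothetical IService interface:

using Moq;

public interface IService
{
    int GetValue();
    void Log(string message);
}

public static class MoqBehaviorDemo
{
    public static void Run()
    {
        // Loose (Moq's default): calls without a setup simply return
        // default values instead of failing the test.
        var loose = new Mock<IService>();
        loose.Setup(s => s.GetValue()).Returns(42);
        loose.Object.Log("ignored");   // fine, no setup required

        // Strict: every call must be set up, or the mock throws.
        var strict = new Mock<IService>(MockBehavior.Strict);
        strict.Setup(s => s.GetValue()).Returns(42);
        // strict.Object.Log("boom");  // would throw a MockException
    }
}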
I use TypeMock since I'm developing on SharePoint. As TypeMock can mock anything, it has proved a valuable resource when unit testing our SharePoint web parts, event receivers, workflows, etc.
On the downside, TypeMock can be expensive; however, there is a version available which is specific to SharePoint and costs less than the full TypeMock package. I highly recommend it.
The one thing I do disagree with is the notion that TypeMock does not encourage you to design your code well. The classes I create, and my overall code, are generally designed well. Just because I use TypeMock doesn't mean I sacrifice the quality of my design - I still practise IoC and SRP. Just because TypeMock can mock anything doesn't mean I write my code to reflect that ability.
You may want to keep in mind that if you need to support a multi-language environment (e.g. VB), all of the code-configurable frameworks (I can speak to Moq and RhinoMocks directly) are going to be painful, given the lack of anonymous delegate/lambda syntax in VB. This will become more feasible in Visual Studio 2010/VB 10, but will still not be comparable to the nice C# lambda syntax.
TypeMock appears to have some support for VB
