I am currently working on a solution in Visual Studio which I am going to deploy to Windows Azure. The solution consists of an MVC3 web role that presents data from SQL Server.
I've already set up Ninject and Fluent NHibernate to decouple the views from the database and the concrete implementations.
What I'd like to do now is put all the data access logic (i.e. NHibernate along with the repositories for data access) into a separate C# project so that I can reuse this project for the MVC3 project AND a future SOA project.
Any ideas how I might achieve this? I figured out how to do it with "code-only" classes, but a handful of issues come up when NHibernate is involved. I suspect it's related to the ISession object in global.asax, which is created for each web request.
Any hints at simple implementations, scenarios or how-tos would be very helpful. If needed I can post code as well - just let me know which parts you'd like to see ;-)
Regards,
Martin
If I understand correctly, I don't see a problem implementing a service class library that gets requests from either the UI or the SOA layer and executes them. For example:
public class UsersService
{
    private readonly ISession _session;

    // The session is injected, e.g. by Ninject.
    public UsersService(ISession session)
    {
        _session = session;
    }

    public IEnumerable<User> GetAllUsers()
    {
        return _session.QueryOver<User>().List();
    }
}
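To address the per-request ISession concern from the question, the binding can live in the web project's composition root while the session factory and the repositories stay in the shared data access project. A minimal sketch using Ninject's InRequestScope (assumes the Ninject.Web.Common package; NHibernateBootstrap.BuildSessionFactory is a hypothetical helper standing in for your Fluent NHibernate setup code):

using Ninject;
using Ninject.Modules;
using Ninject.Web.Common; // for InRequestScope()
using NHibernate;

public class NHibernateModule : NinjectModule
{
    public override void Load()
    {
        // One session factory for the application's lifetime.
        // NHibernateBootstrap.BuildSessionFactory() is a hypothetical
        // helper living in the shared data access project.
        Bind<ISessionFactory>()
            .ToMethod(ctx => NHibernateBootstrap.BuildSessionFactory())
            .InSingletonScope();

        // One ISession per web request, replacing the global.asax approach.
        Bind<ISession>()
            .ToMethod(ctx => ctx.Kernel.Get<ISessionFactory>().OpenSession())
            .InRequestScope();
    }
}

With this in place, a future SOA host can reuse the same data access project and simply swap InRequestScope() for a scope that suits its own request model.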
In the book "Dependency Injection in .NET" by Mark Seemann, the second chapter contains an analysis of a badly written three-layer ASP.NET application. The main point is: the application fails because the lowest layer, data access, cannot be converted from SQL Server with Entity Framework to Azure with a NoSQL database. Here is the exact quotation:
To enable the e-commerce application as a cloud application, the Data Access library must be replaced with a module that uses the Table Storage Service. Is this possible? From the dependency graph in figure 2.10, we already know that both User Interface and Domain libraries depend on the Entity Framework-based Data Access library. If we try to remove the Data Access library, the solution will no longer compile, because a required DEPENDENCY is missing.
In a big application with dozens of modules, we could also try to remove those modules that don't compile to see what would be left. In the case of Mary's application, it's evident that we'd have to remove all modules, leaving nothing behind. Although it would be possible to develop an Azure Table Data Access library that mimics the API exposed by the original Data Access library, there's no way we could inject it into the application.
[Figure 2.10 from the book: the dependency graph of the application]
My question is: why can't a module that imitates the previous behavior be injected into the application, and what does that really mean? Is it related to Azure specifics? I have not worked much with NoSQL databases before.
Essentially, what he means is that your UI code is directly dependent on the code in the Data Access library. An example of how this might be used in the UI layer:
public class SomeController : Controller
{
    [Route("someRoute")]
    [HttpGet]
    public ViewResult SomeRoute()
    {
        // Here we're using the data component directly.
        var dataComponent = new DataAccessLayer.DataComponent();
        return View(dataComponent.GetSomeData());
    }
}
If we want to swap out the Data Access library, we would have to go into all our controllers and change the code to use the new component (unless we created exactly the same class names in the same namespaces, but that's unlikely).
On the other hand, we could also write the controller like this:
public class SomeController : Controller
{
    private readonly IDataComponent _dataComponent;

    public SomeController(IDataComponent dataComponent)
    {
        _dataComponent = dataComponent;
    }

    [Route("someRoute")]
    [HttpGet]
    public ViewResult SomeRoute()
    {
        // Now we're using the interface that was injected.
        return View(_dataComponent.GetSomeData());
    }
}
By defining the class like this, we can specify externally which concrete class implementing the IDataComponent interface should be injected into the constructor. This allows us to "wire" our application from the outside: we inject a concrete class into the class that needs it.
Dependency Injection is one way to make it easier to "program against an interface, not a concrete class".
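To make that external wiring concrete, here is a minimal sketch of a composition root doing the swap by hand (both implementation class names are hypothetical):

// At the application's entry point (the composition root) we decide
// once which implementation every controller receives.
IDataComponent dataComponent = useAzure
    ? new AzureTableDataComponent()       // hypothetical Table Storage implementation
    : new EntityFrameworkDataComponent(); // hypothetical EF implementation

var controller = new SomeController(dataComponent);

In a real MVC application a DI container performs this step for you, but the principle is the same: only this one spot knows the concrete type.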
The example Mark Seemann gives relates to databases vs Azure Table Storage, but it's just that: an example. This is not specific to NoSQL (or storage mechanisms in general). The same principle applies to everything that depends on other classes (generally service-type classes).
EDIT after comments:
It's indeed true that you could just modify the internals of the DataComponent (or repository if that's what you're using).
However, using DI (and programming against an interface in general) gives you more options:
You could have various implementations at the same time and inject a different implementation depending on, for example, which controller is asking.
You could reuse the same instance in all your controllers by specifying the lifecycle at registration (probably not useful in this case).
For testing purposes, you could inject a different implementation into the controller (such as a mock, which you can verify for invocations); see the sketch below.
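A rough sketch of that testing scenario, assuming the Moq library and assuming GetSomeData() returns IEnumerable<string> purely for illustration:

using System.Collections.Generic;
using Moq;
using Xunit;

public class SomeControllerTests
{
    [Fact]
    public void SomeRoute_UsesTheInjectedComponent()
    {
        // Arrange: inject a mock instead of the real data component.
        var mock = new Mock<IDataComponent>();
        mock.Setup(m => m.GetSomeData())
            .Returns(new List<string> { "a", "b" });

        var controller = new SomeController(mock.Object);

        // Act
        var result = controller.SomeRoute();

        // Assert: the controller called through the interface.
        Assert.NotNull(result);
        mock.Verify(m => m.GetSomeData(), Times.Once());
    }
}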
I am trying my hand at creating an ASP.NET MVC 5 app without Entity Framework.
I have an existing database but do not want to use Entity Framework for it. I came up with a simple and uncluttered architecture consisting of Entities, Repository and DAL.
I have created a controller that takes the repository as a dependency:
public class EmployeeController : Controller
{
    private readonly IEmployeeRepository _repository;

    public EmployeeController(IEmployeeRepository repository)
    {
        _repository = repository;
    }

    // GET: Employee
    public ActionResult Index()
    {
        IEnumerable<Employee> employees = _repository.GetEmployees();
        return View(employees);
    }
}
The issue is that I have not created a parameterless constructor for this controller, so how do I pass my repository to it? I am missing some step but am not able to figure out which.
Also, if anyone knows of a downloadable sample application for such a scenario, it would be a great help.
Dependency injection is your answer. There are libraries that will do it for you. You can also do poor man's injection yourself, or use a service locator.
You can use Autofac or Ninject to orchestrate your dependency resolution.
This should help: How do I properly register AutoFac in a basic MVC5.1 website?
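For example, a minimal Autofac setup for the EmployeeController above might look like this (a sketch assuming the Autofac.Mvc5 integration package, with EmployeeRepository standing in for your concrete implementation):

using System.Web.Mvc;
using Autofac;
using Autofac.Integration.Mvc;

// Call once at startup, e.g. from Application_Start in Global.asax.
public static class IocConfig
{
    public static void Register()
    {
        var builder = new ContainerBuilder();

        // Register all controllers in this assembly so Autofac can construct them.
        builder.RegisterControllers(typeof(MvcApplication).Assembly);

        // Tell the container which concrete repository to inject.
        builder.RegisterType<EmployeeRepository>()
               .As<IEmployeeRepository>()
               .InstancePerRequest();

        DependencyResolver.SetResolver(new AutofacDependencyResolver(builder.Build()));
    }
}

Once this runs, MVC resolves EmployeeController through Autofac, so the missing parameterless constructor stops being a problem.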
I looked at using the Repository design pattern with an MVC 5 application I have been working on, but unfortunately it looked like a major rework; basically I would have had to start from scratch with this application. I found it far easier to maintain the MVC application by leaving the Entity Framework models intact. Even though that slows the application down, my resolution is to run it on a virtualized server with more computing resources added, to speed it up beyond its current level.
Entity Framework models are far easier to maintain than a Repository design pattern. If the application is slow because the EF models have many sub-models as virtual properties, that is OK; the easy solution is a more powerful server running the application: more RAM, faster CPUs, more computing resources, and so on.
From my point of view, using a Repository adds far more layers of complexity to an application and makes it more difficult to maintain.
I am planning on building a single page application (SPA) using RavenDB as my data store.
I would like to start with the ASP.NET Hot Towel template for the SPA piece.
I will remove the EntityFramework/WebApi/Breeze components and replace them with RavenDB for storage and ServiceStack for building the backend API.
Most current opinion seems to frown upon using any sort of repository or additional abstraction on top of RavenDB and calls for using the RavenDB API directly inside of controllers (in an MVC app).
I am assuming I should follow the same wisdom when using Raven with ServiceStack and make calls against IDocumentSession directly inside my service implementations.
My concern is that my service implementations will become rather bloated by following this path. It also seems that I will often need to write the same code multiple times, for example, if I need to update a User document within several different web service endpoints.
It also seems likely that I will need to access Raven from other (future) pieces of my application. For example, I may need to add a console application that processes jobs from a queue, and this piece of the app may need to access data within Raven... but from the start, my only path to Raven will be through the web service API. Would I just have this theoretical console app call the web API? That seems inefficient if they are potentially running on the same hardware.
Can anyone offer any advice on how to utilize Raven effectively within my web services and elsewhere while still following best practices for this document store? It would seem practical to create a middle business logic tier that handles calls against Raven directly, allowing my web services to call methods within this tier. Does this make sense?
EDIT
Can anyone provide any recent samples of a similar architecture?
FWIW, we're currently working on an app using ServiceStack and RavenDB. We're using a DDD approach and have our business logic in a rich domain layer. The architecture is:
Web App. Hosts the web client code (SPA) and the service layer.
Service Layer. Web services using ServiceStack, with clean and fairly flat DTOs that are completely decoupled from the domain objects. The web services are responsible for managing transactions and all RavenDB interaction. Most 'command-ish' service operations consist of: a) load the domain object(s) (document(s)) identified by the request, b) invoke the business logic, c) transform the results to response DTOs. We've augmented ServiceStack so that many command-ish operations use an automatic handler that does all of the above without any code required. 'Query-ish' service operations generally consist of: a) executing queries against RavenDB, b) transforming the query results to response DTOs (in practice this is often done as part of a, using RavenDB indexes and transformers during query processing). Business logic is always pushed down to the domain layer.
Domain Layer. Documents, which correspond to 'aggregate roots' in DDD-speak, are completely database agnostic. They know nothing of how they are loaded or saved. Domain objects expose public getters but only private setters; the only way to modify their state is by calling methods. Domain objects expose public methods intended for use by the service layer, and protected/internal methods for use within the domain layer. The domain layer references the Messages assembly, primarily to allow methods on our domain objects to accept complex request objects and avoid painfully long parameter lists.
Messages assembly. A standalone assembly holding the request/response DTOs, which also supports other native .NET clients such as unit tests and integration tests.
As for other clients, we have two options. We can reference ServiceStack.Common and the Messages assembly and call the web services. Alternatively, if the need is substantially different and we wish to bypass the web services, we could create a new client app, reference the Domain Layer assembly and the Raven client and work directly that way.
In my view the repository pattern is an unnecessary and leaky abstraction. We're still developing but the above seems to be working well so far.
EDIT
A greatly simplified domain object might look something like this.
using System;
using System.Collections.Generic;
using System.Linq;

public class Order
{
    public string Id { get; private set; }
    public DateTime Raised { get; private set; }
    public Money TotalValue { get; private set; }
    public Money TotalTax { get; private set; }
    public List<OrderItem> Items { get; private set; }

    // Available to the service layer.
    public Order(Messages.CreateOrder request, IOrderNumberGenerator numberGenerator, ITaxCalculator taxCalculator)
    {
        Raised = DateTime.UtcNow;
        Id = numberGenerator.Generate();
        Items = new List<OrderItem>();

        foreach (var item in request.InitialItems)
            AddOrderItemCore(item);

        UpdateTotals(taxCalculator);
    }

    private void AddOrderItemCore(Messages.AddOrderItem request)
    {
        Items.Add(new OrderItem(this, request));
    }

    // Available to the service layer.
    public void AddOrderItem(Messages.AddOrderItem request, ITaxCalculator taxCalculator)
    {
        AddOrderItemCore(request);
        UpdateTotals(taxCalculator);
    }

    private void UpdateTotals(ITaxCalculator taxCalculator)
    {
        // Assumes Money supports summation (e.g. a + operator and a
        // matching Sum overload); otherwise use Aggregate.
        TotalTax = Items.Sum(x => taxCalculator.Calculate(this, x));
        TotalValue = Items.Sum(x => x.Value);
    }
}
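For context, a command-ish service operation using this domain object might look roughly like the sketch below. The Store/SaveChanges calls are the standard RavenDB IDocumentSession API; the dependency wiring varies by ServiceStack version, and CreateOrderResponse is a made-up response DTO:

// A ServiceStack service; the dependencies are property-injected
// by the IoC container (Funq) at resolution time.
public class OrdersService : Service
{
    public IOrderNumberGenerator NumberGenerator { get; set; }
    public ITaxCalculator TaxCalculator { get; set; }
    public IDocumentSession RavenSession { get; set; }

    public object Post(Messages.CreateOrder request)
    {
        // a) + b) Build the aggregate; the business logic lives in Order itself.
        var order = new Order(request, NumberGenerator, TaxCalculator);

        RavenSession.Store(order);
        RavenSession.SaveChanges(); // commit the unit of work

        // c) Transform to a response DTO (hypothetical type).
        return new Messages.CreateOrderResponse { OrderId = order.Id };
    }
}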
There are two main parts to consider here.
Firstly, as you have already noted, if you go by the word of the more fanatical RavenDB fans, it is some mythical beast which is exempt from the otherwise commonly accepted laws of good application design and should be allowed to permeate your application at will.
It depends on the situation of course but to put it simply, if you would structure your application a certain way with something like SQL Server, do the same with RavenDB. If you would have a DAL layer, ORM, repository pattern or whatever with a SQL Server back-end, do the same with RavenDB. If you don't mind leaky abstractions, or the project is small enough to not warrant abstracting your data access at all, code accordingly.
The main difference with RavenDB is that you're getting a few things like unit of work and the ORM for 'free', but the overall solution architecture shouldn't be that different.
Second, connecting other clients. Why would a console app - or any other client for that matter - access your RavenDB server instance any differently than your web site does? Even if you run the server in embedded mode inside your ASP.NET application, you can still connect other clients to it with the same RavenDB.Client code. You shouldn't need to touch the web service API at all.
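For instance, a console app can talk to the same server with just the client library (a sketch using the RavenDB client API of that era; the URL, database name and User type are placeholders):

using Raven.Client;
using Raven.Client.Document;

class Program
{
    static void Main()
    {
        // The same RavenDB.Client package the web app uses; no web service hop.
        using (IDocumentStore store = new DocumentStore
        {
            Url = "http://localhost:8080",
            DefaultDatabase = "MyApp" // placeholder database name
        }.Initialize())
        using (IDocumentSession session = store.OpenSession())
        {
            var user = session.Load<User>("users/1");
            // ... process queued jobs against the same documents ...
            session.SaveChanges();
        }
    }
}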
I am trying to get to grips with MVC. I am from an ASP.NET background.
After creating a new MVC 3 application, I got Controllers, Models and Views under the same web app project. In ASP.NET we generally create separate projects for models and controllers (which I assume correspond to the business layer). I also created a separate project for the DAL, where I will be using EF.
I am confused: is this the ideal solution structure? Should we not create separate projects for each layer? Since I created the DAL as a separate project, I had to add a reference to the WebApp in it, because I wanted to return Models from the DAL; because of that circular dependency, I am now unable to add a reference to the DAL in my WebApp.
Can someone please throw some light on what I am missing here? Am I not doing it right?
MVC really leaves the "M" part up to the developer.
Even in the official examples you'll see variations. Your question exposes one of the most common misconceptions about MVC: you should NOT bind your domain or data models directly to views, nor should your controller methods accept them as parameters. See this post on over- and under-posting.
Ideally, your controllers will call out to a DAL, and some mechanism will map those data or domain models to view models. It is those view models - models that exist specifically to serve the UI - that should live in the WebApp's "Models" folder.
So you were definitely on the right track creating a new assembly to contain your DAL. One of the "easiest" mechanisms for mapping to a view model is a simple method on each view model:
public class MyWidgetFormModel
{
    public string Name { get; set; }
    public string Price { get; set; }

    // Copy only the fields the view needs from the DAL entity.
    public void MapFromDAL(DAL.Widget widget)
    {
        Name = widget.Name;
        Price = widget.Price;
    }
}
Update: based on your comments, here is an excellent answer about one user's project layout.
When I started with MVC I followed Jeffrey Palermo's onion architecture. You can read about it:
here: http://jeffreypalermo.com/blog/the-onion-architecture-part-1/
here: http://jeffreypalermo.com/blog/the-onion-architecture-part-2/
and here: http://jeffreypalermo.com/blog/the-onion-architecture-part-3/
It relies on IoC to decouple services. I think you should consider using an IoC container, because the MVC architecture was designed around patterns that use IoC to decouple services (layers).
You can also download a working sample that uses the onion architecture from http://codecampserver.codeplex.com/.
It's not the only architecture you can use with MVC, but it's a very good place to start and to learn about IoC and decoupling in MVC applications.
I'm new to the MVC framework and have just run through the NerdDinner sample project. I'm loving this approach over Web Forms ASP.NET.
I'd like to spin off a more sizable side project using this same approach. Do you see anything in that project that would prevent me from scaling the basic structure up to a more complex website?
Examples of things that make me wary:
1) The NerdDinner sample accesses a DB of only two tables; my DB has around 30.
2) The NerdDinner project uses the LINQ to SQL classes directly - all the way from the model, through the controller, to the view. Is that kosher for a larger project?
Do you see any other parts of the NerdDinner framework that might cause me future grief?
I agree with others that the model should be the only place you use LINQ to SQL, and my little addendum to that is: only use LINQ to SQL in models in small projects. For larger sites it might be worth the overhead to create a separate web service project that does all the talking to the database, and to utilize the web service in your model.
I never fully checked out the Nerd Dinner example, but other best practices include typed views and using a data model that allows for easy validation (see xVal or the DataAnnotations model binder). To me these are two of the most important best practices.
Stephen Walther has a lot of excellent tips on his website that are worth checking out and taking into account when setting up a new MVC project.
I would add a service layer between the repositories and the controllers. The service layer will contain all of your business logic, leaving your controllers to deal mainly with processing form input and page flow.
Within the repositories I map LINQ to SQL classes and fields to domain models, and then use the domain models within the service layer, controllers and views. For a larger system the extra layers will prove their worth in the long run. A rough sketch of that shape:
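(All names below are hypothetical; this illustrates the layering, not a prescribed API.)

using System;

// Domain model, mapped from the LINQ to SQL classes inside the repository.
public class Dinner
{
    public int Id { get; set; }
    public string Title { get; set; }
}

public interface IDinnerRepository
{
    Dinner GetById(int id);
    void Save(Dinner dinner);
}

// The service layer owns the business rules; controllers stay thin.
public class DinnerService
{
    private readonly IDinnerRepository _repository;

    public DinnerService(IDinnerRepository repository)
    {
        _repository = repository;
    }

    public void Rename(int id, string newTitle)
    {
        // Business rule lives here, not in the controller.
        if (string.IsNullOrEmpty(newTitle))
            throw new ArgumentException("Title is required", "newTitle");

        var dinner = _repository.GetById(id);
        dinner.Title = newTitle;
        _repository.Save(dinner);
    }
}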
There's a lot of debate around the internet when it comes to the LINQ to SQL classes. Some feel that using the classes directly is not enough abstraction, and some feel that that's what they're there for. At work we started revamping our site, and we're using MVC. The way we decided to go was basically that each of the LINQ to SQL classes implements an interface. I.e.:
// Generated LINQ to SQL class (partial).
public partial class LinqToSqlClass
{
    public int Id { get; set; }
}

public interface ILinqToSqlClass
{
    int Id { get; set; }
}

// A second partial declaration ties the generated class to the interface.
public partial class LinqToSqlClass : ILinqToSqlClass
{
}
This is just a very small part of it. We then have a repository that hands you any of these generated classes, but only typed as their interface. This way, we never actually work directly with the LINQ to SQL classes. There are many different ways to do this, but generally I would say yes: if you're dealing with a large database (especially if the schema may change), or with data that may come from more than one source, definitely don't use the classes directly.
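A stripped-down sketch of such a repository (GetTable<T>() is the standard System.Data.Linq API; everything else here is illustrative):

using System.Data.Linq;
using System.Linq;

public class LinqToSqlRepository
{
    private readonly DataContext _context;

    public LinqToSqlRepository(DataContext context)
    {
        _context = context;
    }

    // Callers only ever see the interface, never the generated class.
    public ILinqToSqlClass GetById(int id)
    {
        return _context.GetTable<LinqToSqlClass>().Single(x => x.Id == id);
    }
}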
Bottom line: there's a lot of good info in that Nerd Dinner chapter, but when creating your own project you'll obviously run into issues of your own, so take it as you go.
The Nerd Dinner text makes the claim that the MVC framework can accommodate other common data abstractions equally well. (It's true.) It sounds like your organization already has one it likes. A good learning strategy would probably be to adapt one to the other.