C# - Repositories versus Services in Supplying Custom Class

Good day,
I want to write clean code modeled on actual enterprise-level applications. I know how to implement repositories and services, but I'm not sure if I'm doing this right.
Suppose I have a custom class (mostly for a JSON result):
public class CustomClass {
    public string Name { get; set; }
    public string Age { get; set; }
}
I have a model class (connected to my dbcontext)
public class Employees {
    public string Name { get; set; }
    public string Age { get; set; }
}
Here's my 1st sample repository that supplies the Custom Class
// in repository file only
public IEnumerable<CustomClass> SupplyCustomClass() {
    return context.Employees.Select(obj => new CustomClass {
        Name = obj.Name,
        Age = obj.Age
    }).ToList();
}
Here's my 2nd sample: the repository supplies the entities first, and a service maps them afterwards.
// in repository file (EmployeeRepo)
public IEnumerable<Employees> SupplyEmployeeFirst() {
    return context.Employees.ToList();
}

// in service file
// dependency injection from EmployeeRepo
public IEnumerable<CustomClass> SupplyCustomClassSecond() {
    var customClass = new List<CustomClass>();
    var employees = employeeRepo.SupplyEmployeeFirst();
    foreach (var employee in employees) {
        customClass.Add(new CustomClass {
            Name = employee.Name,
            Age = employee.Age
        });
    }
    return customClass;
}
Both implementations produce the same result, but I want to learn the best way to do this at an enterprise development level.

Of your two approaches, I think the 2nd one is better: the repository should work with models, not with view models. As a further improvement, you can implement the Unit of Work pattern alongside the Repository pattern.

With the two approaches you have described, I would recommend going with a service layer that calls into the repositories (as per your second example). This allows for a clear separation of concerns, in that only the repository deals with managing data.
Something important to consider, however, is that your second example is wildly inefficient. Your first example pulls just 2 columns of data out of the database and constructs your class, whereas your second example materialises the entire collection to a list and then works with it in memory, which places strain on the DB as well as your host process.
This is where a clear separation between models and view models can become messy. You can either have the repository emit the view models to maintain performance, or you can return IQueryable from the repository layer and work with that. I would personally recommend having the repository return view models if it is a limited data set you need back, but leave all other logic to the service layer. This keeps the repository as the layer that deals directly with your data. In a basic example such as yours it's not obvious what work you would then do within the service, but there will be plenty of things you want to do once the data has been pulled out, and they become clear as you flesh out your project. This separation gives you a clean line between the code that fetches your data and the service that then works with that data to present it, pulls in data from external APIs, or does other general work that has nothing directly to do with your database.
The short version of the Entity Framework performance consideration is that you want to have it pull out as little data as you can and leave it as an IQueryable for as long as possible. Once you run .ToList() or iterate over the collection Entity Framework will materialise the result set.
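As an illustration, here is a minimal sketch of that idea using the same context and classes as in the question (the method names are invented):

// repository: project to the view model but do NOT call ToList(),
// so the query stays composable and runs entirely in SQL
public IQueryable<CustomClass> QueryCustomClass() {
    return context.Employees.Select(obj => new CustomClass {
        Name = obj.Name,
        Age = obj.Age
    });
}

// service: further shaping still happens in the database;
// materialisation only occurs at the ToList() call
public IEnumerable<CustomClass> SupplyCustomClass(int max) {
    return employeeRepo.QueryCustomClass()
        .Take(max)
        .ToList();
}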

Related

Testing application logic against a real relational database with relationships on entities

I'm trying to figure out the best method for integration testing application logic against a real relational database. I'm developing my solution in C# using Entity Framework and NUnit, but this should not be a language-dependent question.
Imagine you're building an application that lets the user create Car entities and Person entities. Each Car must have a Person related to it, so basically one Person can have 0-N Car entities, and one Car can have only 1 Person entity as a FK.
The entities could look like this:
public class Person {
    public int PersonId { get; set; }
    public string Name { get; set; }
    public string Surname { get; set; }
    public List<Car> Cars { get; set; } // the navigation property
}

public class Car {
    public int CarId { get; set; }
    public string Make { get; set; }
    public string Model { get; set; }
    public int PersonId { get; set; } // the foreign key
    public Person Person { get; set; } // the navigation property
}
Suppose you have a class CarRequestHandler with a method GetList that accepts a name filter and returns a list of Car entities that match that name.
Now let's focus on the integration testing aspect: I want to write some tests that connect to a real SQL Server database and check whether my logic works correctly and whether I wrote the right queries using EF Core.
If I want to test the CarRequestHandler.GetList(string name) method, I first have to seed the database with some sample Car entities and then I can execute the test to see what the results of the invocation are. But in order to create a Car, I need to have a Person object already created that can be assigned to the Car entity.
Now, doing this by hand in every test method (or even in a setup fixture) becomes really tedious and cumbersome to write and maintain. In a more complex database, the graph of dependencies of the entity handler under test could become huge, which could mean building an entire object graph of every dependency my entity needs in the real SQL Server database.
Is there some kind of tip you can give me to avoid a big ol' spaghetti codebase that will make me and my teammates go "let's skip testing, we don't have time for that"?
I hope I explained it well enough, let me know if I need to expand on anything.
The best way I know of to run tests against real databases is to use Docker. There is a library called Testcontainers that can greatly simplify the setup for .NET testing. The real chore is getting realistic test data: for some things you can use a faker library to generate realistic-looking data, but for others you're going to have to manually maintain a test data set when it's too hard to generate. A minimal fixture using Testcontainers is sketched below.
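A minimal sketch of such a fixture, assuming the Testcontainers.MsSql NuGet package and NUnit (the fixture name is illustrative):

using System.Threading.Tasks;
using NUnit.Framework;
using Testcontainers.MsSql;

[SetUpFixture]
public class SqlServerFixture
{
    public static MsSqlContainer Container;

    [OneTimeSetUp]
    public async Task StartContainer()
    {
        // spins up a throwaway SQL Server instance in Docker
        Container = new MsSqlBuilder().Build();
        await Container.StartAsync();
        // Container.GetConnectionString() can now be handed to your DbContext
    }

    [OneTimeTearDown]
    public async Task StopContainer()
    {
        await Container.DisposeAsync();
    }
}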
Testing against production databases is also an option for things that don't need to write much to the database, or that are easy to undo / written in a way a user will never see. However, your integration tests then lose portability, and everything that runs them needs access to the production database.
If you have trouble adding data to your database you might want to reconsider how your database is structured, and what data the database should contain by default.
Consider looking at it from the application's perspective: it should be fairly simple for the application to store and retrieve objects from the database. So you should test your CarRequestHandler the same way it would be used by the actual application, i.e. test the interface, not the implementation. In some cases you might need to access the database directly in order to set up specific cases, but this should ideally be fairly rare.
In some cases this might involve running a fairly large part of the application if there are many dependencies between various components. This might be easier to manage if using some kind of Dependency injection framework.
Also note the idea of "default data": your application probably needs some amount of pre-existing data to work, and you will need a way to add this data to any fresh environment. Ideally it should be possible to add this data either from code or by calling a script, and you should add it to the database before running your tests. This lets the tests both exercise the queries and verify that the various components work correctly with the default data. A seed helper along these lines is sketched below.
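For the object-graph problem from the question, a hypothetical seed helper can insert a whole graph in one call, so tests don't repeat the Person-before-Car ceremony (AppDbContext and all names here are assumptions):

using System.Collections.Generic;

public static class TestSeed
{
    public static Person SeedPersonWithCars(AppDbContext ctx, int carCount = 2)
    {
        var person = new Person
        {
            Name = "Test",
            Surname = "Owner",
            Cars = new List<Car>()
        };
        for (var i = 0; i < carCount; i++)
            person.Cars.Add(new Car { Make = "Make" + i, Model = "Model" + i });

        // EF Core inserts the Person and its Cars in one SaveChanges
        ctx.Add(person);
        ctx.SaveChanges();
        return person;
    }
}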
Does the application need to be tested in Production or can this be done in a simulation? Is the Database distributed?

Entity Framework 6: easiest way to denormalize a column to avoid frequent joins

Let's assume I have two entities.
class Author {
    public int Id { get; set; }
    public string Name { get; set; }
    // .....
}

class Article {
    public int Id { get; set; }
    public int AuthorId { get; set; }
    public string Text { get; set; }
}
Now I want to add an AuthorName property to Article, duplicating the existing Author.Name, to simplify the resulting LINQ queries and cut execution time. I'm sure that my database will be used by only one ASP.NET MVC project. What is the common way to implement such a column using EF (without database triggers)?
There is also a slightly harder case: say I want a TotalWordCountInAllArticles column on the Author entity, calculated from the Text property of its Articles.
You can add the AuthorName property to Article and just maintain the integrity manually, by making sure that any code that creates Articles or updates Author.Name also updates all of the Articles. Same thing with TotalWordCount: any time an Article.Text changes, recompute the total from all of that author's Articles.
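For the AuthorName case, that manual bookkeeping might look roughly like this (a sketch; the method and context names are assumptions):

public void RenameAuthor(int authorId, string newName)
{
    var author = context.Authors.Find(authorId);
    author.Name = newName;

    // keep the denormalized copies in sync in the same transaction
    var articles = context.Articles.Where(a => a.AuthorId == authorId);
    foreach (var article in articles)
        article.AuthorName = newName;

    context.SaveChanges(); // one SaveChanges = one transaction
}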
There are a few patterns you could look at to make this more automatic, such as the Domain Events pattern (https://lostechies.com/jimmybogard/2014/05/13/a-better-domain-events-pattern/), but it definitely isn't just plug and play. It really depends on whether these are just a couple of items or whether this will happen frequently.
If you are frequently denormalizing data for performance, you may want to look at an architecture with a normalized DB and a separate process that generates denormalized views of the data and puts them into a document store.
NOTE: This might not answer the EF part of your question but it does offer an alternative solution to your problem.
Not sure how far along you are in the development of your project, but you may want to consider having a look at Drapper, which would make this trivial and fast, and offer a number of other benefits.
Let's assume a small change to your Article model to include the Author model.
public class Article
{
    public int ArticleId { get; set; }
    public string Text { get; set; }

    // using Author model
    public Author Author { get; set; }
}
And assuming that the SQL you'd expect to execute would be something conceptually similar to:
select article.[Id]
      ,article.[Text]
      ,article.[AuthorId]
      ,author.Name
from [Article] article
join [Author] author on author.AuthorId = article.AuthorId;
Implementing a repository to retrieve them with Drapper would be really trivial. It might look something like:
public class ArticleRepository : IArticleRepository
{
    // IDbCommander is a Drapper construct
    private readonly IDbCommander _commander;

    /// <summary>
    /// Initializes a new instance of the <see cref="ArticleRepository"/> class,
    /// injecting an instance of the IDbCommander using your IoC framework of
    /// choice.
    /// </summary>
    public ArticleRepository(IDbCommander commander)
    {
        _commander = commander;
    }

    /// <summary>
    /// Retrieves all article instances.
    /// </summary>
    public IEnumerable<Article> RetrieveAll()
    {
        // pass the query method a reference to a
        // mapping function (Func<T1, T2, TResult>).
        // although you *could* pass the predicate
        // in right here, the code is more readable
        // when it's separated out.
        return _commander.Query(Map.AuthorToArticle);
    }

    private static class Map
    {
        // simple mapping function which allows you
        // to map out exactly what you want, exactly
        // how you want it. no hoop jumping!
        internal static Func<Article, Author, Article>
            AuthorToArticle = (article, author) =>
            {
                article.Author = author;
                return article;
            };
    }
}
You'd wire the SQL to the repository using the configuration available to Drapper. It supports both JSON and XML config files, or you can configure it all in code if you prefer.
I've thrown a quick sample together for you over on GitHub.
Why should you consider this?
There are a number of benefits to going this route:
You indicated a performance concern (execution time). Drapper is an abstraction layer built on top of Dapper - the king of high-performance micro-ORMs.
You control the mapping of your objects explicitly - no weird semantics or framework quirks (like the one you're facing).
No auto-generated SQL. You decide exactly what SQL will be executed.
Your SQL is separated from your C# - if your schema changes (perhaps to improve performance), there's no need to recompile your project, change your entity mapping or alter any of your domain code or repository logic. You simply update the SQL code in your configuration.
Along the same lines, you can design your service/repository layers to be more domain-friendly without data access concerns polluting your service layer (or vice versa).
Fully testable - you can easily mock the results from the IDbCommander.
Less coding - no need for both entities and DTOs (unless you want them), no overriding OnModelCreating methods or deriving from DbContext, no special attributes on your POCOs.
And that's just the tip of the iceberg.

Entity Framework classes vs. POCO

I have a general difference of opinion on an architectural design, and even though Stack Overflow should not be used to ask for opinions, I would like to ask for the pros and cons of the two approaches I describe below:
Details:
- C# application
- SQL Server database
- Using Entity Framework
- We need to decide what objects we will use to store our information and pass throughout the application
Scenario 1:
We will use the Entity Framework entities and pass them all around through our application; the entity object is used to store all information, we pass it to the BL, and eventually our WebApi will take this entity and return the value. No DTOs nor POCOs.
If the database schema changes, we update the entity and modify all classes where it is used.
Scenario 2:
We create an intermediate class - call it a DTO or call it a POCO - to hold all information required by the application. There is an intermediate step of taking the information stored in the entity and populating it into the POCO, but we keep all EF code within the data access layer rather than spread across all layers.
What are the pros and cons of each one?
I would use intermediate classes, i.e. POCO instead of EF entities.
The only advantage I see to using EF entities directly is that it's less code to write...
Advantages to use POCO instead:
You only expose the data your application actually needs
Basically, say you have some GetUsers business method. If you just want the list of users to populate a grid (i.e. you need their ID, name, and first name, for example), you could just write something like this:
public IEnumerable<SimpleUser> GetUsers()
{
    return this.DbContext
        .Users
        .Select(z => new SimpleUser
        {
            ID = z.ID,
            Name = z.Name,
            FirstName = z.FirstName
        })
        .ToList();
}
It is crystal clear what your method actually returns.
Now imagine instead, it returned a full User entity with all the navigation properties and internal stuff you do not want to expose (such as the Password field)...
It really simplifies the job of the people who consume your services.
It's even more obvious for Create-like business methods. You certainly don't want to use a User entity as the parameter; it would be awfully complicated for the consumers of your service to know which properties are actually required...
Imagine the following entity:
public class User
{
    public long ID { get; set; }
    public string Name { get; set; }
    public string FirstName { get; set; }
    public string Password { get; set; }
    public bool IsDeleted { get; set; }
    public bool IsActive { get; set; }
    public virtual ICollection<Profile> Profiles { get; set; }
    public virtual ICollection<UserEvent> Events { get; set; }
}
Which properties are required for you to consume the void Create(User entity); method?
ID: dunno, maybe it's generated, maybe it's not
Name/FirstName: well, those should be set
Password: is that a plain-text password, a hashed version? what is it?
IsDeleted/IsActive: should I activate the user myself? Is it done by the business method?
Profiles: hum... how do I assign a profile to a user?
Events: the hell is that??
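A dedicated input DTO answers all of those questions by construction. A minimal sketch (the type is invented for illustration):

// only what Create actually needs; the service decides IDs, flags, and hashing
public class CreateUserRequest
{
    public string Name { get; set; }
    public string FirstName { get; set; }
    public string Password { get; set; } // plain text here, hashed by the service
}

void Create(CreateUserRequest request);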
It forces you to not use lazy loading
Yes, I hate this feature, for multiple reasons. Some of them are:
It is extremely hard to use efficiently. I've seen, too many times, code that produces thousands of SQL requests because the developers didn't know how to use lazy loading properly.
It is extremely hard to manage exceptions. By allowing SQL requests to be executed at any time (i.e. whenever you lazy load), you delegate the role of managing database exceptions to the upper layer, i.e. the business layer or even the application. A bad habit.
Using POCO forces you to eager-load your entities, much better IMO.
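For comparison, a sketch of the explicit eager-loading style that a POCO boundary pushes you towards (assumes EF's Include extension method is in scope):

// one up-front query with a join, instead of N lazy-load round trips
var users = this.DbContext
    .Users
    .Include(u => u.Profiles)
    .ToList();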
About AutoMapper
AutoMapper is a tool that allows you to automagically convert entities to POCOs and vice versa. I do not like it either. See https://stackoverflow.com/a/32459232/870604
I have a counter-question: Why not both?
Consider any arbitrary MVC application. In the model and controller layer you'll generally want to use the EF objects. If you defined them using Code First, you've essentially defined how they are used in your application first and then designed your persistence layer to accurately save the changes you need in your application.
Now consider serving these objects to the view layer. The views may or may not reflect your working objects, or an aggregation of them. This often leads to POCOs/DTOs that capture whatever is needed in the view. Another scenario is when you want to publish objects in a web service. Many frameworks provide easy serialization of POCO classes, in which case you typically need to either 1) annotate your EF classes or 2) make DTOs.
Also be aware that any lazy loading you may have on your EF classes is lost when you use POCOs or when you close your context.

How to add custom methods with database logic

I created an application with this architecture:
MyProject.Model: Contains POCOs. Example:
public class Car
{
    public int Id { get; set; }
    public string Name { get; set; }
}
MyProject.Repositories: Contains repositories and UnitOfWork
public class UnitOfWork
{
    // ...
    public Repository<Car> Cars { get; set; }
    // ...
}

public class Repository<T>
{
    // ...
    // Add / Update / Delete ...
    // ...
}
MyProject.Web: ASP.Net MVC application
Now I want a way to interact with the data through methods. For example, on MyProject.Model.Car I want to add a method that fetches data via non-navigation properties, a method named GetSimilarCars(). The problem is that a repository cannot interact with other repositories, and thus cannot perform such operations on the database.
I don't really know how to do this in a simple manner and what is the best place in my architecture to put this.
Another example could be UserGroup.Deactivate(); this method would deactivate each user and send them a notification by email. Of course I could put this method in the web application's controller, but I think that is not the place for code that could be called from many places in the application.
Note: I am using Entity Framework.
Any suggestion on how to implement such operations?
This type of stuff goes into your DAL (essentially your unit of work and repository, in this limited scenario). However, this is a pattern that bit me when I first started working with MVC. Entity Framework already implements these patterns: your DbContext is your unit of work and your DbSet is your repository. All that creating another layer on top of this does is add complexity. I personally ended up going with a service pattern instead, which merely sits on top of EF and allows me to do things like someService.GetAllFoo(). That way the use of Entity Framework is abstracted away (I can switch out the DAL at any time, and even remove the database completely and go with an API instead, without having to change any code in the rest of my application), but I'm also not just reinventing the wheel.
In a service pattern you specifically provide endpoints only for the things you need, so it's a perfect candidate for things like GetSimilarCars: you just add another method to the service to encapsulate the logic for that.
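As a sketch, such a service method might look like this (the context name and the "similar" rule are invented for illustration):

public class CarService
{
    private readonly MyDbContext _context;

    public CarService(MyDbContext context)
    {
        _context = context;
    }

    public List<Car> GetSimilarCars(int carId)
    {
        var car = _context.Cars.Find(carId);

        // "similar" here just means sharing the same name;
        // the real rule lives in this one place
        return _context.Cars
            .Where(c => c.Id != carId && c.Name == car.Name)
            .ToList();
    }
}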
I would assume that your Business Layer (BL) would be communicating with your Data Access Layer (DAL). That way from your BL you could reach out to different repositories in DAL. That would solve your problem of repositories not being able to share data (that data would be shared through BL).
See here: N-tier architecture w/ EF and MVC
I did not quite get your question, but this is how you assign the values and add the object to a collection:
public class Repository<T>
{
    private readonly List<Car> _lstCar = new List<Car>();

    public void Add()
    {
        var cobj = new Car();
        cobj.Id = 1234; // Id is an int, not a string
        cobj.Name = "Mercedes";
        _lstCar.Add(cobj);
    }
}

MVC ViewModels and Entity Framework queries

I am new to both MVC and Entity Framework and I have a question about the right/preferred way to do this.
I have sort of been following the Nerd Dinner MVC application for how I am writing this application. I have a page that has data from a few different places. It shows details that come from a few different tables and also has a dropdown list from a lookup table.
I created a ViewModel class that contains all of this information:
class DetailsViewModel {
    public List<Foo> DropdownListData { get; set; }
    // comes from table 1
    public string Property1 { get; set; }
    public string Property2 { get; set; }
    public Bar SomeBarObject { get; set; } // comes from table 2
}
In the Nerd Dinner code, their example is a little too simplistic. The DinnerFormViewModel takes in a single entity, Dinner, and based on the Dinner it creates a SelectList for the countries from the dinner's location.
Because of the simplicity, their data access code is also pretty simple. He has a simple DinnerRepository with a method called GetDinner(). In his action methods he can do simple things like:
Dinner dinner = new Dinner();
// return the view model
return View(new DinnerFormViewModel(dinner));
OR
Dinner dinner = repository.GetDinner(id);
return View(new DinnerFormViewModel(dinner));
My query is a lot more complex than this, pulling from multiple tables...creating an anonymous type:
var query = from a in ctx.Table1
            where a.Id == id
            select new { a.Property1, a.Property2, a.Foo, a.Bar };
My question is as follows:
What should my repository class look like? Should the repository class return the ViewModel itself? That doesn't seem like the right way to do things, since the ViewModel sort of implies it is being used in a view. Since my query is returning an anonymous object, how do I return that from my repository so I can construct the ViewModel in my controller actions?
While most of the answers are good, I think they are missing the between-the-lines part of your question.
First of all, there is no 100% right way to go about it, and I wouldn't get too hung up on the exact pattern to use yet. As your application gets more and more developed you will start seeing what works and what doesn't, and you will figure out how best to change it to fit you and your application. I just got done completely changing the pattern of my ASP.NET MVC backend, mostly because a lot of the advice I found wasn't working for what I was trying to do.
That being said, look at your layers by what they are supposed to do. The repository layer is solely meant for adding/removing/editing data in your data source. It doesn't know how that data is going to be used, and frankly it doesn't care. Therefore, repositories should just return your EF entities.
The part of your question that others seem to be missing is that you need an additional layer between your controllers and the repositories, usually called the service layer or business layer. This layer contains various classes (however you want to organize them) that get called by controllers. Each of these classes calls the repository to retrieve the desired data and then converts it into the view models that your controllers will end up using.
This service/business layer is where your business logic goes (and if you think about it, converting an entity into a view model is business logic: it defines how your application is actually going to use that data). This means you don't have to call specific conversion methods or anything. The idea is that you tell your service/business layer what you want to do and it gives you business entities (view models) back, with your controllers having no knowledge of the actual database structure or how the data was retrieved.
The service layer should be the only layer that calls repository classes as well.
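Put concretely, the flow described above might look like this minimal sketch (every name is a placeholder, and the entity shape is assumed):

public class DetailsService
{
    private readonly IRepository<Table1Entity> _repository;

    public DetailsService(IRepository<Table1Entity> repository)
    {
        _repository = repository;
    }

    public DetailsViewModel GetDetails(int id)
    {
        // the repository hands back plain EF entities...
        var entity = _repository.Get(id);

        // ...and converting them to a view model is business logic,
        // so it happens here, not in the repository or the controller
        return new DetailsViewModel
        {
            Property1 = entity.Property1,
            Property2 = entity.Property2,
            SomeBarObject = entity.Bar
        };
    }
}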
You are correct that a repository should not return a view model, as changes to your view would then force changes in your data layer.
Your repository should be centered on an aggregate root. If your Property1, Property2, Foo and Bar are related in some way, I would extract a new class to handle this.
public class FooBarDetails
{
    public string Property1 { get; set; }
    public string Property2 { get; set; }
    public Foo Foo { get; set; }
    public Bar Bar { get; set; }
}
var details = _repo.GetDetails(detailId);
If Foo and Bar are not related at all it might be an option to introduce a service to compose your FooBarDetails.
FooBarDetails details = _service.GetFooBar(id);
where GetFooBar(int) would look something like this:
var foo = _fooRepo.Get(id);
var bar = _barRepo.Get(id);
return new FooBarDetails { Foo = foo, Bar = bar, Property1 = "something", Property2 = "something else" };
This all is conjecture since the design of the repository really depends on your domain. Using generic terms makes it hard to develop potential relationships between your objects.
Updated
From the comments: suppose we are dealing with an aggregate root of an Order. An order would have its OrderItems and also the customer who placed the order.
public class Order
{
    public List<OrderItem> Items { get; private set; }
    public Customer OrderedBy { get; private set; }
    // Other stuff
}

public class Customer
{
    public List<Order> Orders { get; set; }
}
Your repo should return a fully hydrated order object.
var order = _rep.Get(orderId);
Since your order has all the information needed I would pass the order directly to the view model.
public class OrderDetailsViewModel
{
    public Order Order { get; set; }

    public OrderDetailsViewModel(Order order)
    {
        Order = order;
    }
}
Now, having a view model with only one item might seem overkill (and it most likely will be at first), but if you need to display more items on your view it starts to help.
public class OrderDetailsViewModel
{
    public Order Order { get; set; }
    public List<Order> SimilarOrders { get; set; }

    public OrderDetailsViewModel(Order order, List<Order> similarOrders)
    {
        Order = order;
        SimilarOrders = similarOrders;
    }
}
The repository should work only with models, not anonymous types, and it should only implement CRUD operations. If you need filtering you can add a service layer for that.
For mapping between view models and models you can use any mapping library, such as AutoMapper.
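For example, a minimal AutoMapper setup might look like this (a sketch; assumes the AutoMapper package, an invented FooViewModel, and a list of Foo entities named foos):

var config = new MapperConfiguration(cfg =>
{
    // property names match, so no explicit member configuration is needed
    cfg.CreateMap<Foo, FooViewModel>();
});
var mapper = config.CreateMapper();

var viewModels = mapper.Map<List<FooViewModel>>(foos);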
The current answers are very good. I would just point out that you are abusing anonymous types; they should only be used for intermediate transport steps, and never passed to other places in your code (e.g. view model constructors).
My approach would be to inject the view model with all the relevant model classes. E.g. an action method might look like:
var dinner = dinnerRepository.Get(dinnerId);
var bar = barRepository.Get(barId);
var viewModel = new DinnerAndBarFormViewModel(dinner, bar);
return View(viewModel);
I have the same doubt as the poster, and I am still not convinced. I personally do not much like the advice of limiting the repository to basic CRUD operations. IMHO, performance should always be kept in mind when developing a real application, and substituting two separate queries for a SQL outer join in master-detail relationships doesn't sound good to me.
Also, this way the principle that only the needed fields should be queried is completely lost: with this approach, we are forced to always retrieve all the fields of all the tables involved, which is simply crazy in non-toy applications!
