CRUD over an aggregate root using traditional ADO.NET - C#

Can anyone show me simple CRUD statements for an aggregate root using traditional ADO.NET?
Thanks in advance!

(This is written on the assumption that a GUID or some non-database generated primary key is used)
Also, a lot of boilerplate code such as connection management should be moved into a Repository base class.
If Order is the aggregate root, the OrderLineRepository should probably be internal to the assembly:
public class OrderRepository : Repository
{
    public void Save(Order order)
    {
        if (order.IsDirty)
        {
            // Sets up the connection if required, plus the command and SQL.
            ICommand command = BuildCommandForSave(order);
            command.Execute();

            OrderLineRepository orderLineRepo = GetOrderLineRepo();
            foreach (OrderLine line in order.OrderLines)
            {
                orderLineRepo.Save(line);
            }
        }
    }
}
However, I'd stress that this is a simple, naive implementation. I'd personally use an ORM like NHibernate for persistence when doing DDD, as the requirements for a good, well-tested persistence layer are non-trivial.
Also, this assumes that IsDirty takes children into account; we would also need a way to tell whether the order is new or edited, not just dirty.
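To make the ADO.NET part concrete, here is a minimal sketch of what BuildCommandForSave might boil down to for the Order row itself. The table and column names (Orders, OrderId, CustomerId) and the upsert strategy are assumptions for illustration, not part of the answer above:

// Minimal sketch only: saves the aggregate root's own row with raw ADO.NET.
// An upsert works here because the GUID key is generated client-side.
using (SqlConnection con = new SqlConnection(connectionString))
using (SqlCommand cmd = con.CreateCommand())
{
    cmd.CommandText =
        @"UPDATE Orders SET CustomerId = @CustomerId WHERE OrderId = @OrderId;
          IF @@ROWCOUNT = 0
              INSERT INTO Orders (OrderId, CustomerId)
              VALUES (@OrderId, @CustomerId);";
    cmd.Parameters.AddWithValue("@OrderId", order.OrderId);
    cmd.Parameters.AddWithValue("@CustomerId", order.CustomerId);

    con.Open();
    cmd.ExecuteNonQuery();
}

Delete follows the same pattern with a DELETE statement, and reads use ExecuteReader to map columns back onto the entity and its order lines.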

Related

Which class should be responsible for creating ID for entity?

I'm struggling a little bit with the following problem. Let's say I want to manage dependencies in my project so that my domain won't depend on any external stuff; in this problem, on the repository. In this example, my domain lives in project.Domain.
To do so I declared an interface for my repository in project.Domain, which I implement in project.Infrastructure. Reading the DDD Red Book by Vernon, I noticed that he suggests placing the method for creating a new ID for an aggregate in the repository, like:
public class EntityRepository
{
    public EntityId NextIdentity()
    {
        // create and return a new instance of EntityId
    }
}
Inside this EntityId object would be a GUID, but I want to model my ID explicitly, which is why I'm not using plain GUIDs. I also know I could skip this problem completely and generate the GUID on the database side, but for the sake of argument let's assume that I really want to generate it inside my application.
Right now I'm just wondering: are there any specific reasons for this method to be placed inside the repository, as Vernon suggests, or could I implement identity creation inside the entity itself, like:
public class Entity
{
    public static EntityId NextIdentity()
    {
        // create and return a new instance of EntityId
    }
}
You could place it in the repository as Vernon says, but another idea would be to pass a factory into the constructor of your base entity that creates the identifier. This way you have identifiers before you even interact with repositories, and you can swap in an implementation per ID-generation strategy. A repository may hold a connection to something like a web service or a database, which can be costly or unavailable.
There are good strategies (especially with GUIDs) for generating identifiers well on the client side. This also makes your application fully independent of the outside world.
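One such strategy is the "COMB" GUID, where part of a random GUID is overwritten with a timestamp so that client-generated IDs sort roughly sequentially, which is friendlier to clustered indexes. A rough sketch, assuming SQL Server's uniqueidentifier ordering:

// Rough COMB-style GUID sketch. The byte positions target SQL Server's
// uniqueidentifier sort order; other databases order GUIDs differently.
public static class CombGuid
{
    public static Guid NewComb()
    {
        byte[] guidBytes = Guid.NewGuid().ToByteArray();
        byte[] tickBytes = BitConverter.GetBytes(DateTime.UtcNow.Ticks);
        if (BitConverter.IsLittleEndian)
            Array.Reverse(tickBytes); // most significant byte first

        // Overwrite the last 6 bytes, which SQL Server compares first.
        Array.Copy(tickBytes, 2, guidBytes, 10, 6);
        return new Guid(guidBytes);
    }
}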
This also enables you to have different identifier types throughout your application if the need arises. For example:
public abstract class Entity<TKey>
{
    public TKey Id { get; }

    protected Entity() { }

    protected Entity(IIdentityFactory<TKey> identityFactory)
    {
        if (identityFactory == null)
            throw new ArgumentNullException(nameof(identityFactory));

        Id = identityFactory.CreateIdentity();
    }
}
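The IIdentityFactory<TKey> abstraction isn't shown in the answer; a minimal version might look like this (the interface shape and the GUID-backed implementation are assumptions):

// Hypothetical contract plus a GUID-backed implementation.
public interface IIdentityFactory<TKey>
{
    TKey CreateIdentity();
}

public sealed class GuidIdentityFactory : IIdentityFactory<Guid>
{
    public Guid CreateIdentity()
    {
        return Guid.NewGuid();
    }
}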
Yes, you could bypass the call to the repository and just generate the identity on the Entity. The problem, however, is that you've broken the core idea behind the repository: keeping everything related to entity storage isolated from the entity itself.
I would say keep the NextIdentity method in the repository, and still use it, even if you are only generating the GUIDs client-side. The benefit is that in some future where you want to change how the identities are seeded, you can support that through the repository. Whereas if you go with the approach directly on the Entity, you would have to refactor later to support such a change.
Also, consider scenarios where you would use different repositories, such as testing: you might want to generate two identities with the same ID and check that the clash fails properly. Having the repository handle generation gives you the opportunity to get creative in such ways, without writing one-off test cases that don't mimic the calls production would actually make.
TLDR; Keep it in the repository, even if your identifier can be client-side generated.
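For concreteness, the repository-hosted version Vernon describes can be as small as this; it assumes EntityId simply wraps a Guid and that an IEntityRepository interface lives in the domain project, per the question:

// Sketch: client-side identity generation kept behind the repository,
// so the seeding strategy can change without touching callers.
public class EntityRepository : IEntityRepository
{
    public EntityId NextIdentity()
    {
        return new EntityId(Guid.NewGuid());
    }
}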

Entity Framework 6: easiest way to denormalize a column to avoid frequent joins

Let's assume, I have two entities.
class Author
{
    public int Id { get; set; }
    public string Name { get; set; }
    //.....
}

class Article
{
    public int Id { get; set; }
    public int AuthorId { get; set; }
    public string Text { get; set; }
}
Now I want to add an AuthorName property to Article, duplicating the existing Author.Name, to simplify the resulting LINQ queries and improve execution time. I'm sure that my database will be used by only one ASP.NET MVC project. What is the common way to implement such a column using EF (without database triggers)?
There is also a slightly more difficult case: let's say I want a TotalWordCountInAllArticles column in the Author entity, calculated from the Text property of its Articles.
You can add the AuthorName property to Article and just maintain the integrity manually, by making sure that any code that creates Articles or updates Author.Name also updates all of the Articles. Same with TotalWordCount: any time an Article.Text changes, re-sum the counts across that author's Articles. A sketch of the AuthorName case follows below.
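A minimal EF6 sketch of that manual synchronization; the context and set names (BlogContext, Authors, Articles) are assumptions for illustration:

// Hypothetical: rename an author and keep the denormalized
// Article.AuthorName column in sync within one SaveChanges call.
public void RenameAuthor(int authorId, string newName)
{
    using (var db = new BlogContext())
    {
        var author = db.Authors.Find(authorId);
        author.Name = newName;

        foreach (var article in db.Articles.Where(a => a.AuthorId == authorId))
        {
            article.AuthorName = newName;
        }

        db.SaveChanges(); // single transaction, so both stay consistent
    }
}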
There are a few patterns you could look at to make this more automatic, such as the Domain Events pattern (https://lostechies.com/jimmybogard/2014/05/13/a-better-domain-events-pattern/), but it definitely isn't just plug and play. It really depends on whether these are just a couple of items or whether this is going to happen frequently.
If you are frequently denormalizing data for performance, you may want to look at an architecture where there is a normalized DB and a separate process that generates denormalized views of the data and puts them into a document store.
NOTE: This might not answer the EF part of your question but it does offer an alternative solution to your problem.
Not sure how far along you are in the development of your project, but you may want to consider having a look at Drapper, which would make this trivial and fast, and offer a number of other benefits.
Let's assume a small change to your Article model to include the Author model.
public class Article
{
    public int ArticleId { get; set; }
    public string Text { get; set; }

    // using the Author model
    public Author Author { get; set; }
}
And assuming that the SQL you'd expect to execute would be something conceptually similar to:
select article.[Id]
      ,article.[Text]
      ,article.[AuthorId]
      ,author.[Name]
from [Article] article
join [Author] author on author.AuthorId = article.AuthorId;
Implementing a repository to retrieve them with Drapper would be really trivial. It might look something like:
public class ArticleRepository : IArticleRepository
{
    // IDbCommander is a Drapper construct
    private readonly IDbCommander _commander;

    /// <summary>
    /// Initializes a new instance of the <see cref="ArticleRepository"/> class,
    /// injecting an instance of the IDbCommander using your IoC framework of
    /// choice.
    /// </summary>
    public ArticleRepository(IDbCommander commander)
    {
        _commander = commander;
    }

    /// <summary>
    /// Retrieves all article instances.
    /// </summary>
    public IEnumerable<Article> RetrieveAll()
    {
        // Pass the query method a reference to a mapping function
        // (Func<T1, T2, TResult>). Although you *could* pass the
        // predicate in right here, the code is more readable when
        // it's separated out.
        return _commander.Query(Map.AuthorToArticle);
    }

    private static class Map
    {
        // Simple mapping function which allows you to map out exactly
        // what you want, exactly how you want it. No hoop jumping!
        internal static Func<Article, Author, Article> AuthorToArticle =
            (article, author) =>
            {
                article.Author = author;
                return article;
            };
    }
}
You'd wire the SQL to the repository using the configuration available to Drapper. It supports both JSON and XML config files, or you can configure it all in code if you want to.
I've thrown a quick sample together for you over on Github.
Why should you consider this?
There are a number of benefits to going this route:
You indicated a performance concern (execution time). Drapper is an abstraction layer built on top of Dapper - the king of high-performance micro-ORMs.
You control the mapping of your objects explicitly - no weird semantics or framework quirks (like the one you're facing).
No auto generated SQL. You decide exactly what SQL will be executed.
Your SQL is separated from your C# - if your schema changes (perhaps to improve performance) there's no need to recompile your project, change your entity mapping or alter any of your domain code or repository logic. You simply update the SQL code in your configuration.
Along the same lines, you can design your service/repository layers to be more domain friendly, without data access concerns polluting your service layer (or vice versa).
Fully testable - you can easily mock the results from the IDbCommander.
Less coding - no need for both entities and DTOs (unless you want them), no overriding OnModelCreating methods or deriving from DbContext, no special attributes on your POCOs.
And that's just the tip of the iceberg.

Data access architectures with Raven DB

What data access architectures are available that I can use with Raven DB?
Basically, I want to separate persistence via interfaces so I don't expose the underlying storage to the upper layers. That is, I don't want my domain to see IDocumentStore or IDocumentSession, which are from RavenDB.
I have implemented the generic repository pattern and that seems to work. However, I am not sure that is actually the correct approach. Maybe I should go towards command-query segregation or something else?
What are your thoughts?
Personally, I'm not really experienced with the Command Pattern. I saw that it was used in Rob Ashton's excellent tutorial.
For myself, I'm going to try using the following:
Repository Pattern (as you've done)
Dependency Injection with StructureMap
Moq for mock testing
Service layer for isolating business logic (not sure of the pattern here, or even if this is a pattern)
So when I wish to get any data from RavenDB (the persistence source), I'll use services, which will then call the appropriate repository. This way I'm not exposing the repository to the application, nor is the repository very heavy or complex; it's basically FindAll / Save / Delete.
For example:
public SomeController(IUserService userService, ILoggingService loggingService)
{
    UserService = userService;
    LoggingService = loggingService;
}

public ActionResult Index()
{
    // Find all active users; page 1, 15 records.
    var users = UserService.FindWithIsActive(1, 15);
    return View(new IndexViewModel(users));
}

public class UserService : IUserService
{
    public UserService(IGenericRepository<User> userRepository,
                       ILoggingService loggingService)
    {
        Repository = userRepository;
        LoggingService = loggingService;
    }

    public IEnumerable<User> FindWithIsActive(int page, int count)
    {
        // Note: Repository.Find() returns an IQueryable<User> in this case.
        // Think of it as a SELECT * FROM Users, if it were an RDBMS.
        return Repository.Find()
            .WithIsActive()
            .Skip((page - 1) * count) // page is 1-based, so skip whole pages
            .Take(count)
            .ToList();
    }
}
So that's a very simple and contrived example with no error/validation checking, try/catch, etc., and it's pseudocode, but you can see how the services are rich while the repository is (supposed to be, for me at least) simple and light. I then expose data only via services.
That's what I do right now with .NET and Entity Framework and I'm literally hours away from giving this a go with RavenDb (WOOT!)
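For completeness, the service contract SomeController depends on might look like this; it's not shown in the answer, so treat it as an assumption:

// Hypothetical contract assumed by SomeController above.
public interface IUserService
{
    IEnumerable<User> FindWithIsActive(int page, int count);
}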
What are you trying to achieve by that?
You can't build an application which makes use of both an RDBMS and a DocDB, not efficiently at least. You have to decide for yourself which database you are going to use, and then go all the way with it. If you decide to go with an RDBMS, you can use NHibernate, for example; and then again, there's no need for any other abstraction layer.

What is the best way to support multiple databases for a .NET product?

We are designing a product which could support multiple databases. We are doing something like this currently so that our code supports MS SQL as well as MySQL:
namespace Handlers
{
    public class BaseHandler
    {
        protected string connectionString;
        protected string providerName;

        protected BaseHandler()
        {
            connectionString = ApplicationConstants.DatabaseVariables.GetConnectionString();
            providerName = ApplicationConstants.DatabaseVariables.GetProviderName();
        }
    }
}

namespace Constants
{
    internal class ApplicationConstants
    {
        public class DatabaseVariables
        {
            public static readonly string SqlServerProvider = "System.Data.SqlClient";
            public static readonly string MySqlProvider = "MySql.Data.MySqlClient";

            public static string GetConnectionString()
            {
                return ConfigurationManager.ConnectionStrings["CONNECTION_STRING"].ConnectionString;
            }

            public static string GetProviderName()
            {
                return ConfigurationManager.ConnectionStrings["CONNECTION_STRING"].ProviderName;
            }
        }
    }
}
namespace Handlers
{
    internal class InfoHandler : BaseHandler
    {
        public InfoHandler() : base()
        {
        }

        public void Insert(InfoModel infoModel)
        {
            CommonUtilities commonUtilities = new CommonUtilities();
            string cmdInsert = InfoQueryHelper.InsertQuery(providerName);
            DbCommand cmd = null;
            try
            {
                DbProviderFactory provider = DbProviderFactories.GetFactory(providerName);
                DbConnection con = LicDbConnectionScope.Current.GetOpenConnection(provider, connectionString);
                cmd = commonUtilities.GetCommand(provider, con, cmdInsert);
                commonUtilities.PrepareCommand(cmd, infoModel.AccessKey, "paramAccessKey", DbType.String, false, provider, providerName);
                commonUtilities.PrepareCommand(cmd, infoModel.AccessValue, "paramAccessValue", DbType.String, false, provider, providerName);
                cmd.ExecuteNonQuery();
            }
            // DbException is the common base, so this catches both the
            // SQL Server and the MySQL provider exceptions (SqlException
            // alone would never match MySQL errors).
            catch (DbException dbException)
            {
                // -2146232060 for MS SQL Server
                // -2147467259 for MySQL Server
                /* Check whether the SQL Server instance is running or not */
                if (dbException.ErrorCode == -2146232060 || dbException.ErrorCode == -2147467259)
                {
                    throw new BusinessException("ER0008");
                }
                else
                {
                    throw new BusinessException("GENERIC_EXCEPTION_ERROR");
                }
            }
            catch (Exception)
            {
                throw; // rethrow without resetting the stack trace
            }
            finally
            {
                if (cmd != null)
                {
                    cmd.Dispose();
                }
            }
        }
    }
}
namespace QueryHelpers
{
    internal class InfoQueryHelper
    {
        public static string InsertQuery(string providerName)
        {
            if (providerName == ApplicationConstants.DatabaseVariables.SqlServerProvider)
            {
                return @"INSERT INTO table1
                             (ACCESS_KEY
                             ,ACCESS_VALUE)
                         VALUES
                             (@paramAccessKey
                             ,@paramAccessValue)";
            }
            else if (providerName == ApplicationConstants.DatabaseVariables.MySqlProvider)
            {
                return @"INSERT INTO table1
                             (ACCESS_KEY
                             ,ACCESS_VALUE)
                         VALUES
                             (?paramAccessKey
                             ,?paramAccessValue)";
            }
            else
            {
                return string.Empty;
            }
        }
    }
}
Can you please suggest whether there is a better way of doing it? Also, what are the pros and cons of this approach?
Whatever you do, don't write your own mapping code. It's already been done before, and it's probably been done a million times better than whatever you could write by hand.
Without a doubt, you should use NHibernate. It's an object-relational mapper which makes database access transparent: you define a set of DAL classes which represent each table in your database, and you use the NHibernate providers to perform queries against your database. NHibernate will dynamically generate the SQL required to query the database and populate your DAL objects.
The nice thing about NHibernate is that it generates SQL based on whatever you've specified in the config file. Out of the box, it supports SQL Server, Oracle, MySQL, Firebird, PostgreSQL and a few other databases.
I'd use NHibernate.
Here's a nice beginner tutorial.
For your current need, I agree with NHibernate...
Just want to point out something with your class hierarchy...
You would do better to use an interface.
Like this (cleaned up into compilable C#; the original sketched it as pseudocode):
interface IDBParser
{
    void Function1();
    void Function2();
}

class MSSQLParser : IDBParser
{
    public void Function1() { /* SQL Server specific */ }
    public void Function2() { /* SQL Server specific */ }
}

class MySQLParser : IDBParser
{
    public void Function1() { /* MySQL specific */ }
    public void Function2() { /* MySQL specific */ }
}

Then in your code you can use the interface:

static void Main()
{
    bool useSqlServer = true; // decided by configuration

    IDBParser dbParser;
    if (useSqlServer)
        dbParser = new MSSQLParser();
    else
        dbParser = new MySQLParser();

    // the parser can be sent by parameter, global setting, central module, ...
    SomeFunction(dbParser);
}

static void SomeFunction(IDBParser dbParser)
{
    dbParser.Function1();
}
That way it will be easier to manage, and your code won't be full of the same if/else conditions. It will also be a lot easier to add other databases. Another advantage is that it helps with unit testing, since you can pass in a mock object. A central place that picks the parser is sketched below.
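The factory method itself is an assumption, reusing the provider names from the question's code:

// Hypothetical factory: one central place that maps the configured
// ADO.NET provider name to the matching parser implementation.
static IDBParser CreateParser(string providerName)
{
    switch (providerName)
    {
        case "System.Data.SqlClient":
            return new MSSQLParser();
        case "MySql.Data.MySqlClient":
            return new MySQLParser();
        default:
            throw new NotSupportedException("Unknown provider: " + providerName);
    }
}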
If you have to code it yourself rather than use a product which provides unified access, remember that objects like SqlDataAdapter and OracleDataAdapter inherit from the common DbDataAdapter (at least in later versions of the runtime). If you cast to DbDataAdapter, you can write code that works with both databases wherever the logic is the same for both. Some of your code will look a little like this:
DbDataAdapter adapter = GetOracleDataAdapter() as DbDataAdapter;
Once you've cast it, it doesn't matter whether it is a SqlDataAdapter or an OracleDataAdapter; you call it the same way.
However, remember that coding for two databases means using only the features that exist in both, while having to work around the shortcomings of both. It's not really a good idea.
If you need a mapping from database entries to objects, I suggest you go with the solution others have already suggested: NHibernate.
If this seems like overkill for your application, you want to go with the ADO.NET approach, and you do not need an O/RM solution, you should have a look at what the Spring.NET guys did and learn about their ADO.NET provider abstraction.
There are object-relational mapping layers out there that will support multiple database technologies, like Entity Spaces.
What's always good in such cases is to create a layered architecture where all the DB-related stuff lives JUST in the data access layer. Then you can have different implementations of your DAO layer: one for Oracle, one for SQL Server, etc.
You should separate the business layer from the DAO layer with interfaces, such that your business layer only uses them to access the DAO layer. That way you can swap the underlying implementation of the DAO layer to run on an Oracle DB or whatever system you like.
Another good suggestion is to take a look at object-relational mappers, as Scott already suggested. I'd look at NHibernate or Entity Framework.
Many people have suggested an O/R mapping framework such as NHibernate. This is quite a reasonable approach unless you don't want to use an O/R mapper for some reason. Something like NHibernate will probably get you 95%+ of the way but you may need to write some custom SQL. Don't panic if this is the case; you can still do an ad-hoc solution for the rest.
In this case, take the bits that do need the custom SQL for and separate them out into a platform specific plugin module. Write Oracle, MySQL, SQL Server (etc.) plugins as necessary for the individual database platforms you want to support.
ADO.NET makes it fairly easy to wrap sprocs, so you might be able to move the platform-dependent layer down into some stored procedures, presenting a more-or-less consistent API to the middle tier. There are still some platform dependencies (such as the '@' prefix on SQL Server variable names), so you would need to make a generic sproc wrapper mechanism (which is not all that hard; a sketch follows after this answer).
With any luck, the specific operations you need to break out in this manner will be fairly small in number so the amount of work to maintain the plugins will be limited.
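A minimal sketch of such a generic sproc wrapper over the provider-neutral ADO.NET base classes; the method names here are assumptions:

// Hypothetical generic sproc wrapper. Provider-neutral, except that each
// platform plugin must supply its own parameter-name convention.
public static DbCommand CreateSprocCommand(DbConnection connection, string sprocName)
{
    DbCommand cmd = connection.CreateCommand();
    cmd.CommandType = CommandType.StoredProcedure;
    cmd.CommandText = sprocName;
    return cmd;
}

public static void AddParameter(DbCommand cmd, string name, DbType type, object value)
{
    DbParameter p = cmd.CreateParameter();
    p.ParameterName = name; // prefix convention ('@', '?', ':') varies per provider
    p.DbType = type;
    p.Value = value;
    cmd.Parameters.Add(p);
}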
One approach to this problem is to design your application to work entirely with disconnected DataSets, and write a data access component that handles fetching data from the different database brands you'll be supporting, as well as persisting changes made to the DataSets by your application back to the original databases.
Pros: DataSets in .Net are well-written, easy-to-use and powerful, and do an excellent job of providing methods and tools for working with table-based data.
Cons: This method can be problematic if your application needs to work with extremely large sets of data on the client side.
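As a rough illustration of the disconnected approach, built on the same provider-factory plumbing shown earlier in this thread (the query and table name are placeholders):

// Sketch: fill a disconnected DataSet via the provider-neutral factory,
// so the same code serves SQL Server, MySQL, and other supported brands.
DbProviderFactory factory = DbProviderFactories.GetFactory(providerName);
DataSet ds = new DataSet();

using (DbConnection con = factory.CreateConnection())
using (DbDataAdapter adapter = factory.CreateDataAdapter())
using (DbCommand cmd = con.CreateCommand())
{
    con.ConnectionString = connectionString;
    cmd.CommandText = "SELECT ACCESS_KEY, ACCESS_VALUE FROM table1";
    adapter.SelectCommand = cmd;
    adapter.Fill(ds, "table1"); // Fill opens and closes the connection itself
}
// ds can now travel to the client, be edited, and come back for an Update().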
Right now, Microsoft's Entity Framework has a few shortcomings, some of which can be deal breakers, depending on the application's intended architecture.
From what I've seen and read about v2, which will ship with .NET 4, I think it will certainly deserve a look.

Problem using LINQ to SQL with one DataContext per atomic action

I have started using LINQ to SQL in a (somewhat DDD-like) system which looks (overly simplified) like this:
public class SomeEntity // Imagine this is a fully mapped linq2sql class.
{
    public Guid SomeEntityId { get; set; }
    public AnotherEntity Relation { get; set; }
}

public class AnotherEntity // Imagine this is a fully mapped linq2sql class.
{
    public Guid AnotherEntityId { get; set; }
}

public interface IRepository<TId, TEntity>
{
    TEntity Get(TId id);
}

public class SomeEntityRepository : IRepository<Guid, SomeEntity>
{
    public SomeEntity Get(Guid id)
    {
        SomeEntity someEntity = null;
        using (DataContext context = new DataContext())
        {
            someEntity = (from e in context.SomeEntity
                          where e.SomeEntityId == id
                          select e).SingleOrDefault();
        }
        return someEntity;
    }
}
Now I have a problem. When I try to use SomeEntityRepository like this:
public static class Program
{
    public static void Main(string[] args)
    {
        IRepository<Guid, SomeEntity> someEntityRepository = new SomeEntityRepository();
        SomeEntity someEntity = someEntityRepository.Get(new Guid("98011F24-6A3D-4f42-8567-4BEF07117F59"));

        Console.WriteLine(someEntity.SomeEntityId);
        Console.WriteLine(someEntity.Relation.AnotherEntityId);
    }
}
everything works nicely until the program reaches the last WriteLine, which throws an ObjectDisposedException: the lazy-loaded Relation property tries to query through a DataContext that no longer exists.
I do see the actual problem, but how do I solve this? I guess there are several solutions, but none of those I have thought of to date would be good in my situation.
Get away from the repository pattern and use a new DataContext for each atomic unit of work.
I really would not want to do this. One reason is that I do not want the applications to be aware of the repository. Another is that I do not think making the linq2sql stuff COM-visible would be good.
Also, I think that doing context.SubmitChanges() would probably commit much more than I intended to.
Specifying DataLoadOptions to fetch related elements.
As I want my Business Logic Layer to just reply with some entities in some cases, I do not know which sub-properties they need to use.
Disabling lazy loading/delayed loading for all properties.
Not an option, because there are quite a few tables and they are heavily linked. This could cause a lot of unnecessary traffic and database load.
Some post on the internet said that using .Single() should help.
Apparently it does not ...
Is there any way to solve this misery?
BTW: We decided to use LINQ to SQL because it is a relatively lightweight ORM solution that ships with the .NET Framework and Visual Studio. If the .NET Entity Framework fits this pattern better, switching to it may be an option. (We are not that far into the implementation yet.)
Rick Strahl has a nice article about DataContext lifecycle management here: http://www.west-wind.com/weblog/posts/246222.aspx.
Basically, the atomic action approach is nice in theory but you're going to need to keep your DataContext around to be able to track changes (and fetch children) in your data objects.
See also: Multiple/single instance of Linq to SQL DataContext and LINQ to SQL - where does your DataContext live?.
You have to either:
1) Leave the context open because you haven't fully decided what data will be used yet (aka, Lazy Loading).
or 2) Pull more data on the initial load if you know you will need that other property.
Explanation of the latter: here
I'm not sure you have to abandon Repository if you go with atomic units of work. I use both, though I admit to throwing out the optimistic concurrency checks since they don't work out in layers anyway (without using a timestamp or some other required convention). What I end up with is a repository that uses a DataContext and throws it away when it's done.
This is part of an unrelated Silverlight example, but the first three parts show how I'm using a Repository pattern with a throwaway LINQ to SQL context, FWIW: http://www.dimebrain.com/2008/09/linq-wcf-silver.html
Specifying DataLoadOptions to fetch related elements. As I want my Business Logic Layer to just reply with some entities in some cases, I do not know which sub-properties they need to use.
If the caller is granted the coupling necessary to use the .Relation property, then the caller might as well specify the DataLoadOptions.
DataLoadOptions loadOptions = new DataLoadOptions();
loadOptions.LoadWith<SomeEntity>(e => e.Relation);

SomeEntity someEntity = someEntityRepository
    .Get(new Guid("98011F24-6A3D-4f42-8567-4BEF07117F59"), loadOptions);

// Inside the repository's Get overload:
using (DataContext context = new DataContext())
{
    context.LoadOptions = loadOptions;
    // ... run the query as before; Relation is now loaded eagerly,
    // so it survives the context being disposed.
}
This is what I do, and so far it's worked really well.
1) Make the DataContext a member variable in your repository. Yes, this means your repository should now implement IDisposable and not be left open... maybe something you want to avoid having to do, but I haven't found it to be inconvenient.
2) Add some methods to your repository like this:
public SomeEntityRepository WithSomethingElseTheCallerMightNeed()
{
    dlo.LoadWith<SomeEntity>(se => se.RelatedEntities);
    return this; // so you can do method chaining
}
Then, your caller looks like this:
SomeEntity someEntity = someEntityRepository
    .WithSomethingElseTheCallerMightNeed()
    .Get(new Guid("98011F24-6A3D-4f42-8567-4BEF07117F59"));
You just need to make sure that when your repository hits the db, it uses the data load options specified in those helper methods... in my case "dlo" is kept as a member variable, and then set right before hitting the db.
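Filling in what the answer leaves implicit, the repository internals might look roughly like this. Member names are assumptions, and note that LINQ to SQL only honors LoadOptions assigned before the first query runs on that context:

// Hypothetical internals of the chaining repository above.
public class SomeEntityRepository : IDisposable
{
    private readonly DataContext _context = new DataContext();
    private readonly DataLoadOptions dlo = new DataLoadOptions();

    public SomeEntity Get(Guid id)
    {
        // LoadOptions must be set before the first query executes.
        _context.LoadOptions = dlo;
        return _context.GetTable<SomeEntity>()
                       .SingleOrDefault(e => e.SomeEntityId == id);
    }

    public void Dispose()
    {
        _context.Dispose();
    }
}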
