I need to build a Data Access Library to be used by many small applications afterwards.
It will make heavy use of DataReader objects. The same tables may exist, with the same structure, either in SQL Server or in DB2/400. This means that a method such as
GetItemsByWarehouse()
must be able to run against either the SQL Server database or DB2. Which one it runs against depends on server availability and user selection.
What I plan to do (and need advice on) is:
1. Implement the DAL based on the Singleton design pattern, to ensure that I will have only one instance of my library.
2. Have a property that sets the connection string.
3. Have a property that sets whether the target server is AS400 or SQL Server.
I don't know if this course of action is correct. Should I implement point #3, or could I derive the type from the connection string?
Also, how should I implement a method such as the one above? Check the property inside the method and decide whether to use SqlConnection or OleDbConnection, etc.?
I've pasted this code from my micro ORM. There are multiple constructor overloads to specify which DB you want used.
public class DbAccess : IDisposable
{
    public DbAccess()
    {
        var cnx = ConfigurationManager.ConnectionStrings[0];
        if (cnx == null) throw new InvalidOperationException("I need a connection!!!");
        Init(cnx.ConnectionString, ProviderFactory.GetProviderByName(cnx.ProviderName));
    }

    public DbAccess(string connectionStringName)
    {
        var cnx = ConfigurationManager.ConnectionStrings[connectionStringName];
        if (cnx == null) throw new InvalidOperationException("I need a connection!!!");
        Init(cnx.ConnectionString, ProviderFactory.GetProviderByName(cnx.ProviderName));
    }

    public DbAccess(string cnxString, string provider)
    {
        Init(cnxString, ProviderFactory.GetProviderByName(provider));
    }

    public DbAccess(string cnxString, DBType provider)
    {
        Init(cnxString, ProviderFactory.GetProvider(provider));
    }

    public DbAccess(string cnxString, IHaveDbProvider provider)
    {
        Init(cnxString, provider);
    }

    // other stuff
}
Note that the DAO (DbAccess) doesn't care about the concrete provider.
Here's how the ProviderFactory looks. Here you could add a method to detect the db and return a provider (see the sketch after the factory code).
internal static class ProviderFactory
{
    public static IHaveDbProvider GetProviderByName(string providerName)
    {
        switch (providerName)
        {
            case SqlServerProvider.ProviderName: return new SqlServerProvider();
            case MySqlProvider.ProviderName: return new MySqlProvider();
            case PostgresProvider.ProviderName: return new PostgresProvider();
            case OracleProvider.ProviderName: return new OracleProvider();
            case SqlServerCEProvider.ProviderName: return new SqlServerCEProvider();
            case SqliteProvider.ProviderName: return new SqliteProvider();
        }
        throw new Exception("Unknown provider");
    }

    public static IHaveDbProvider GetProvider(DBType type)
    {
        switch (type)
        {
            case DBType.SqlServer: return new SqlServerProvider();
            case DBType.SqlServerCE: return new SqlServerCEProvider();
            case DBType.MySql: return new MySqlProvider();
            case DBType.PostgreSQL: return new PostgresProvider();
            case DBType.Oracle: return new OracleProvider();
            case DBType.SQLite: return new SqliteProvider();
        }
        throw new Exception("Unknown provider");
    }
}
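If you'd rather detect the database instead of configuring it, a helper could go here. A rough sketch only: connection-string sniffing is fragile (the checks below are illustrative, not exhaustive), and relying on the configured ProviderName is more robust.

// Hypothetical addition to ProviderFactory: guess the provider from
// hints in the connection string when no provider name is available.
public static IHaveDbProvider GetProviderFromConnectionString(string cnxString)
{
    if (cnxString.IndexOf("Initial Catalog=", StringComparison.OrdinalIgnoreCase) >= 0)
        return new SqlServerProvider();
    if (cnxString.IndexOf("Uid=", StringComparison.OrdinalIgnoreCase) >= 0)
        return new MySqlProvider();
    // ... similar checks for the remaining providers ...
    throw new NotSupportedException("Could not detect provider from connection string");
}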
For more code snippets and inspiration you can check the Github repo
I would advise against the Singleton pattern; it's much better to let a DI container manage the instance's lifetime. Also, the app should use the interface of the DAO, not the concrete instance (this will help you in the future).
Take a look at the Abstract Factory pattern.
You can have an interface with the DAL contracts and an implementation for each context. Using a factory, you can decide which implementation to use in each case; the factory will need the "switch rule" to decide what to use.
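A minimal sketch of what that could look like for the SQL Server / DB2 case in the question (IItemsDal, the two implementations, and DalFactory are illustrative names, not an existing API):

using System;
using System.Data;

// The contract shared by both database targets.
public interface IItemsDal
{
    DataTable GetItemsByWarehouse(string warehouseCode);
}

public class SqlServerItemsDal : IItemsDal
{
    public DataTable GetItemsByWarehouse(string warehouseCode)
    {
        // Open a SqlConnection, execute the query with a SqlDataReader,
        // fill and return a DataTable...
        throw new NotImplementedException();
    }
}

public class Db2ItemsDal : IItemsDal
{
    public DataTable GetItemsByWarehouse(string warehouseCode)
    {
        // Same query via an OleDbConnection / iSeries provider...
        throw new NotImplementedException();
    }
}

public static class DalFactory
{
    // The "switch rule": pick the implementation from server availability
    // and/or user selection.
    public static IItemsDal Create(bool useDb2)
    {
        return useDb2 ? (IItemsDal)new Db2ItemsDal() : new SqlServerItemsDal();
    }
}

The calling applications depend only on IItemsDal, so adding a third database later means adding one class and one branch in the factory.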
Related
I'm tasked with implementing a Business Object / Data Access Layer for a project and have to expect thousands of concurrent users.
I've always used singletons to manage the DAL, but I never gave much thought to how it would behave with so many users at the same time, so I'd like to ask about the proper use for it.
I have:
public class UserDAL
{
    private static UserDAL _userDAL = null;

    // Private constructor
    private UserDAL() { }

    public static UserDAL GetInstance()
    {
        if (_userDAL == null)
        { _userDAL = new UserDAL(); }
        return _userDAL;
    }

    // Example of a method
    public User GetUsers()
    {
        IDataReader dataReader = ConnectionFactory.GetConnection().ExecuteSomeQuery("queryHere");
        User user = null;
        // ... map dataReader into user
        return user;
    }
}
For my ConnectionFactory I don't think it's a problem, although I did read that it's best to leave connection pooling to ADO.NET itself:
public sealed class ConnectionFactory
{
    private static string _connectionString = ConfigurationManager.ConnectionStrings["ConnectionName"].ConnectionString;

    // My connection interface
    private static IConnection _connection = null;

    public static IConnection GetConnection()
    {
        if (_connection == null)
        {
            // some checks to determine the type
            _connection = new SQLConnection(_connectionString);
        }
        return _connection;
    }
}
I'm also using the singleton pattern in the BO, although I don't think it's necessary:
public class UserBO
{
    private static UserBO _userBO = null;
    private static UserDAL _userDAL = null;

    private UserBO() { }

    public static UserBO GetInstance()
    {
        if (_userBO == null)
        {
            _userBO = new UserBO();
            _userDAL = UserDAL.GetInstance();
        }
        return _userBO;
    }

    // Example of a method
    public User GetUser()
    {
        // Rules
        return _userDAL.GetUsers();
        //return UserDAL.GetInstance().GetUsers(); //or this
    }
}
I'm doing it like this just so I can call in the UI/Presentation layer:
User someUser = UserBO.GetInstance().GetUser(1);
This has worked for me in the applications I've made so far, but I'm guessing that's because there weren't too many simultaneous users.
I'm worried about what would happen to the UserDAL instance when a second user requests something while a first user is already doing some heavy operation with it.
Should I drop this pattern in the BO/DAL layer and leave it only in the ConnectionFactory? Are there any issues which I should expect if I use this?
I would definitely drop it altogether, especially for the connection: the ConnectionFactory could be static, but it should return a new connection each time it is asked. ADO.NET is very good at managing connection pooling and you just need to get out of its way.
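For illustration, a minimal sketch of such a factory (same configuration key as the question's code, but handing out a fresh connection on every call):

using System.Configuration;
using System.Data;
using System.Data.SqlClient;

public static class ConnectionFactory
{
    private static readonly string _connectionString =
        ConfigurationManager.ConnectionStrings["ConnectionName"].ConnectionString;

    // A NEW connection per call; ADO.NET's pool transparently reuses
    // the underlying physical connections.
    public static IDbConnection CreateConnection()
    {
        return new SqlConnection(_connectionString);
    }
}

Each caller then opens and disposes its own connection, e.g. using (var con = ConnectionFactory.CreateConnection()) { ... }.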
In anything which has changeable state, keep away from singletons. This includes ADO.NET connections and your actual business objects. Having one user mutate the state of an object that is being used by another user can lead to all sorts of strange bugs: in a web site, you basically have a massively multithreaded application, and changeable singletons are very bad news!
You do need to come up with some sort of locking strategy, though, for when two or more users change copies of the same business object. A valid strategy includes saying 'Actually, this isn't going to be a problem so I'll ignore it' - but only if you have thought about it. The two basic strategies are Optimistic and Pessimistic Locking.
Optimistic locking means that you optimistically assume users mostly won't change the same things (for whatever reason), so you don't put database locks on the data you read. This is the only practical option for a web site.
Pessimistic locking says all possibly-changed data will, when read, have DB locks applied until the user is finished with it. This means keeping a transaction open, and it's not practical for a web site.
Optimistic Locking can be implemented by creating Update Statements which update a row only where all columns which haven't been changed by the current user also haven't been changed in the database; if they have, someone else has changed the same row. Alternatively, you can add a column to all tables - version int not null - and update where the version hasn't changed since you read the object; you also increment the version number in every update.
If either method fails, you need to reread the now-current data and get your user to confirm or re-apply their changes. Bit of a pain but can be necessary.
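As an illustration of the version-column variant, a sketch in plain ADO.NET (the Orders table, its columns, and the connectionString / newStatus / order locals are made up for the example):

using System.Data;
using System.Data.SqlClient;

// The UPDATE succeeds only if nobody bumped the version since we read the row.
using (var con = new SqlConnection(connectionString))
using (var cmd = new SqlCommand(
    @"UPDATE Orders
         SET Status = @status, version = version + 1
       WHERE Id = @id AND version = @originalVersion", con))
{
    cmd.Parameters.AddWithValue("@status", newStatus);
    cmd.Parameters.AddWithValue("@id", order.Id);
    cmd.Parameters.AddWithValue("@originalVersion", order.Version);

    con.Open();
    if (cmd.ExecuteNonQuery() == 0)
    {
        // Zero rows updated: another user changed the row first.
        // Re-read the current data and ask the user to confirm or re-apply.
        throw new DBConcurrencyException("Row changed since it was read.");
    }
}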
I would advise you to move away from the Singleton pattern for testability: Dependency Injection & Singleton Design pattern
Instead, take a look at Dependency Injection. Ninject is a good way to start.
DI will take care of wiring the BO and DAL together:
public interface IUserRepository
{
    IEnumerable<User> GetUsers();
}

public class UserBO
{
    private readonly IUserRepository _userRepository;

    public UserBO(IUserRepository userRepository)
    {
        _userRepository = userRepository;
    }

    public IEnumerable<User> GetUsers()
    {
        return _userRepository.GetUsers();
    }
}
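For completeness, a sketch of the Ninject wiring (SqlUserRepository is a hypothetical concrete implementation of IUserRepository):

using Ninject;

var kernel = new StandardKernel();

// Transient binding: each resolution gets its own repository,
// so there is no shared mutable singleton state between users.
kernel.Bind<IUserRepository>().To<SqlUserRepository>();

// Ninject inspects UserBO's constructor and injects IUserRepository for you.
var userBO = kernel.Get<UserBO>();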
As for reusing the Connection Pool: Should you reuse SqlConnection, SqlDataAdapter, and SqlCommand objects?
How would you go about registering different IDbConnectionFactory instances in Funq and then accessing them directly within your services? Do named instances somehow come into play here?
Is this the best approach to take when using different databases across services?
Thanks!
EDIT:
An example ;). I could be way off here because I'm pretty new to IoC, but say, for example, I have two separate database connections that I'd like to inject. In ServiceStack, this is done in Global.asax.
container.Register<IDbConnectionFactory>(c =>
    new OrmLiteConnectionFactory(@"Connection String 1", SqlServerOrmLiteDialectProvider.Instance));

container.Register<IDbConnectionFactory>(c =>
    new OrmLiteConnectionFactory(@"Connection String 2", SqlServerOrmLiteDialectProvider.Instance));
Both of these seem to be injected hunky-dory.
These are then accessed automatically on the service end via something like this:
public IDbConnectionFactory DbFactory { get; set; }
In this case, it seems to be giving me the first one registered. How can I get access to a specific one on the service end? Hopefully that makes it a little more clear.
Here's a full-fledged example from ServiceStack.Examples that only uses one IDbConnectionFactory:
Movies Rest
My question above is still valid, but the following might help you anyway.
Funq does not support automatic constructor injection (a.k.a. auto-wiring), so you will have to do this by hand by constructing Func<T> lambda expressions. Because you are already doing constructor injection by hand, it is easy to choose which IDbConnectionFactory you wish to inject into your services. Example:
IDbConnectionFactory yellowDbConFactory =
    new YellowDbConnectionFactory();
IDbConnectionFactory blueDbConFactory =
    new BlueDbConnectionFactory();
IDbConnectionFactory purpleDbConFactory =
    new PurpleDbConnectionFactory();

container.Register<IService1>(c =>
    new Service1Impl(yellowDbConFactory,
        c.Resolve<IDep1>()));

container.Register<IService2>(c =>
    new Service2Impl(blueDbConFactory));

container.Register<IService3>(c =>
    new Service3Impl(purpleDbConFactory,
        c.Resolve<IDep2>()));
Of course you can also use named registrations, like this:
container.Register<IDbConnectionFactory>("yellow",
new YellowDbConnectionFactory());
container.Register<IDbConnectionFactory>("blue",
new BlueDbConnectionFactory());
container.Register<IDbConnectionFactory>("purple",
new PurpleDbConnectionFactory());
container.Register<IService1>(c =>
new Service1Impl(
c.Resolve<IDbConnectionFactory>("yellow"),
c.Resolve<IDep1>());
container.Register<IService2>(c =>
new Service2Impl(
c.Resolve<IDbConnectionFactory>("blue"));
container.Register<IService3>(c =>
new Service3Impl(
c.Resolve<IDbConnectionFactory>("purple"),
c.Resolve<IDep2>());
Because of the lack of support for auto-wiring, you'll end up with these rather awkward registrations, and this will pretty soon result in a maintenance nightmare in your composition root, but that's unrelated to your question ;-)
You should usually try to prevent ambiguity in your registrations. In your case you've got a single interface that does two things (connects to two databases). Unless both databases share the exact same model, each database deserves its own interface (if the two implementations are not interchangeable, you'll be violating the Liskov substitution principle):
interface IYellowDbConnectionFactory : IDbConnectionFactory
{
}
interface IPurpleDbConnectionFactory : IDbConnectionFactory
{
}
Because of the way ServiceStack works, you probably need to provide an implementation for each:
class YellowDbConnectionFactory : OrmLiteConnectionFactory,
    IYellowDbConnectionFactory
{
    public YellowDbConnectionFactory(string s) : base(s) { }
}

class PurpleDbConnectionFactory : OrmLiteConnectionFactory,
    IPurpleDbConnectionFactory
{
    public PurpleDbConnectionFactory(string s) : base(s) { }
}
Now you should change the definition of your services to use the specific interface instead of using the IDbConnectionFactory:
public class MovieService : RestServiceBase<Movie>
{
    private readonly IYellowDbConnectionFactory dbFactory;

    public MovieService(IYellowDbConnectionFactory factory)
    {
        this.dbFactory = factory;
    }
}
Note that this class now uses constructor injection instead of property injection. You can get this to work with property injection, but it is usually better to go with constructor injection. Here is a SO question about it.
With Funq, your configuration will then look like this:
container.Register<MovieService>(c =>
    new MovieService(
        c.Resolve<IYellowDbConnectionFactory>()));
Those two new interfaces, two classes, and the change to the MovieService didn't win you a lot, because Funq doesn't support auto-wiring: you will be the one wiring everything together manually. However, when you switch to a framework that does support auto-wiring, this design allows the container to inject the right dependencies without a problem, because there is no ambiguity about what to inject.
Although Funq doesn't support auto-wiring, ServiceStack's implementation of it does. The latest version of ServiceStack includes these Funq.Container overloads:
container.RegisterAutoWired<T>();
container.RegisterAutoWiredAs<T,TAs>();
container.RegisterAs<T,TAs>();
So in Steven's example you can also do:
container.RegisterAs<YellowDbConnectionFactory,IYellowDbConnectionFactory>();
And it will automatically register the dependencies for you.
Thought I'd chip in my 2 cents here, though I realise the question is pretty old. I wanted to access a transactional DB and a logging DB from ServiceStack, and this is how I ended up doing it from the AppHostBase Configure() method:
container.Register<IDbConnectionFactory>(c =>
{
    OrmLiteConnectionFactory dbFactory = new OrmLiteConnectionFactory(
        ConfigurationManager.ConnectionStrings["MyTransactionalDB"].ConnectionString,
        MySqlDialect.Provider);
    dbFactory.ConnectionFilter = x => new ProfiledDbConnection(x, Profiler.Current);
    dbFactory.RegisterConnection("LoggingDB",
        ConfigurationManager.ConnectionStrings["MyLoggingDB"].ConnectionString,
        MySqlDialect.Provider);
    return dbFactory;
});
By default, the "MyTransactionalDB" is used when opening a connection from the factory, but I can explicitly access the logging DB from a service via:
using (var db = DbFactory.Open("LoggingDB"))
{
db.Save(...);
}
Try using the Repository pattern instead of this IoC approach (which just complicates things unnecessarily). The code above seems not to work; I suspect something has changed. I'm still unclear as to how registering an IDbConnectionFactory magically populates the IDbConnection property, and would love some explanation around this.
If someone ever does get this working using the ServiceStack IoC container, I'd love to see how, and it would be hugely beneficial to update the SS docs (I'm quite happy to do it).
You can also use a dictionary.
Create an enum with your database key names:
public enum Database
{
Red,
Blue
}
In Startup.cs, create a dictionary of functions that each open a new SqlConnection, then register the dictionary as a singleton dependency:
Dictionary<Database, Func<IDbConnection>> connectionFactory = new()
{
{ Database.Red, () => new SqlConnection(Configuration.GetConnectionString("RedDatabase")) },
{ Database.Blue, () => new SqlConnection(Configuration.GetConnectionString("BlueDatabase")) }
};
services.AddSingleton(connectionFactory);
Afterwards, you can get the instance of the dependency in an object's constructor like so:
public class ObjectQueries
{
    private readonly IDbConnection _redConnection;
    private readonly IDbConnection _blueConnection;

    public ObjectQueries(Dictionary<Database, Func<IDbConnection>> connectionFactory)
    {
        _redConnection = connectionFactory[Database.Red]();
        _blueConnection = connectionFactory[Database.Blue]();
    }
}
It's clean and readable ;)
In a previous question folks helped me solve a repository lifetime problem; now the question is how to make it work nicely in a composite service.
Let's say I have these services:
public class OrderService : IOrderService
{
    IRepository<Order> orderRepository;

    public OrderService(IRepositoryFactory repositoryFactory)
    {
        orderRepository = repositoryFactory.GetRepository<Order>();
    }

    public void CreateOrder(OrderData orderData)
    {
        ...
        orderRepository.SubmitChanges();
    }
}

public class ReservationService : IReservationService
{
    IRepository<Reservation> reservationRepository;

    public ReservationService(IRepositoryFactory repositoryFactory)
    {
        reservationRepository = repositoryFactory.GetRepository<Reservation>();
    }

    public void MakeReservations(OrderData orderData)
    {
        ...
        reservationRepository.SubmitChanges();
    }
}
And now the interesting part, the composition service:
public class CompositionService : ICompositionService
{
    IOrderService orderService;
    IReservationService reservationService;

    public CompositionService(IOrderService orderService, IReservationService reservationService)
    {
        this.orderService = orderService;
        this.reservationService = reservationService;
    }

    public void CreateOrderAndMakeReservations(OrderData orderData)
    {
        using (var ts = new TransactionScope())
        {
            orderService.CreateOrder(orderData);
            reservationService.MakeReservations(orderData);
            ts.Complete();
        }
    }
}
The problem is that this won't work correctly if the IRepositoryFactory lifestyle is transient (because you would get two different data contexts, and that would require distributed transactions to be enabled, which we are trying to avoid). Any ideas how to write this correctly?
My observations:
In general, factories should be singletons. If your factory isn't a singleton, then you are probably just hiding another factory behind it.
Factories are meant for creating objects on demand. Your code simply creates a repository in the constructor, so I don't really see the difference between that and simply making the repository a direct constructor injection parameter (see the sketch after these observations).
These all seem to me like workarounds for a more fundamental problem (described in your first question), and these workarounds only make the problem more complicated. Unless you solve the root problem, you will end up with a complex dependency schema and smelly code.
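To make the second observation concrete, here is a sketch of OrderService with the repository injected directly (same types as in the question, no factory); the composition root then controls which repository instance, and hence which data context, the services share:

public class OrderService : IOrderService
{
    private readonly IRepository<Order> orderRepository;

    // The repository (and its data context) is created and shared by the
    // composition root, not constructed inside the service.
    public OrderService(IRepository<Order> orderRepository)
    {
        this.orderRepository = orderRepository;
    }

    public void CreateOrder(OrderData orderData)
    {
        // ...
        orderRepository.SubmitChanges();
    }
}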
IMO, this is a distributed transaction scenario.
In the example you mentioned, the fact that OrderService & ReservationService use the same data context is an implementation detail hidden in the code.
I don't think it is correct to pass this knowledge up to the CompositionService by wrapping the service calls in a TransactionScope: now the composition service is aware of the shared data context and so needs to use a TransactionScope to run the code correctly.
In my opinion, the composition service code should look like:
try
{
    if (orderService.TryCreateOrder(orderData))
    {
        if (reservationService.TryMakeReservation(orderData))
        {
            reservationService.Commit();
            orderService.Commit();
        }
        else
        {
            orderService.TryRollbackOrder(orderData);
            throw new ReservationCouldNotBeMadeException();
        }
    }
    else
    {
        throw new OrderCouldNotBeCreatedException();
    }
}
catch (CouldNotRollbackOrderServiceException)
{
    // do something here...
}
catch (CouldNotCommitServiceException)
{
    // do something here...
}
In this case, the OrderService.TryCreateOrder method will insert an Order with a PendingReservation status, or some other relevant status which indicates that the Order is inserted but not completed. This state changes when Commit is called on the services (the Unit of Work pattern?).
In this case, the implementation details of the services are completely hidden from the consumer of the service, while composition is still possible, independent of the underlying implementation details.
HTH.
Currently my code uses an object factory to return a processor based on a string tag, which has served its purpose up until now.
using Core;
using Data;

public static class TagProcessorFactory
{
    public static ITagProcessor GetProcessor(string tag)
    {
        switch (tag)
        {
            case "gps0":
                return new GpsTagProcessor();
            case "analog_manager":
                return new AnalogManagerTagProcessor();
            case "input_manager":
                return new InputManagerTagProcessor();
            case "j1939":
                return new J1939TagProcessor(new MemcachedProvider(new[] { "localhost" }, "DigiGateway"), new PgnRepository());
            default:
                return new UnknownTagProcessor();
        }
    }
}
Calling Code
var processor = TagProcessorFactory.GetProcessor(tag.Name);
if (!(processor is UnknownTagProcessor))
{
    var data = processor.Process(unitId, tag.Values);
    Trace.WriteLine("Tag <{0}> processed. # of IO Items => {1}".FormatWith(tag.Name, data.Count()));
}
As you can see, one of my items has dependencies, and when I try to write testing code I want to pass in mock repositories and cache providers, but I can't seem to think of a way to do this.
Is this a bad design, or does anyone have any ideas to fix it and make my factory testable?
Thanks
Since you are using Autofac, you can take advantage of the lookup relationship type:
public class Foo
{
    private readonly IIndex<string, ITagProcessor> _tagProcessorIndex;

    public Foo(IIndex<string, ITagProcessor> tagProcessorIndex)
    {
        _tagProcessorIndex = tagProcessorIndex;
    }

    public void Process(int unitId, Tag tag)
    {
        ITagProcessor processor;
        if (_tagProcessorIndex.TryGetValue(tag.Name, out processor))
        {
            var data = processor.Process(unitId, tag.Values);
            Trace.WriteLine("Tag <{0}> processed. # of IO Items => {1}".FormatWith(tag.Name, data.Count()));
        }
    }
}
See the TypedNamedAndKeysServices wiki article for more information. To register the various processors, you would associate each with its key:
builder.RegisterType<GpsTagProcessor>().Keyed<ITagProcessor>("gps0");
builder.RegisterType<AnalogManagerTagProcessor>().Keyed<ITagProcessor>("analog_manager");
builder.RegisterType<InputManagerTagProcessor>().Keyed<ITagProcessor>("input_manager");
builder
    .Register(c => new J1939TagProcessor(new MemcachedProvider(new[] { "localhost" }, "DigiGateway"), new PgnRepository()))
    .Keyed<ITagProcessor>("j1939");
Notice we don't register UnknownTagProcessor. That was a signal to the caller of the factory that no processor was found for the tag, which we express using TryGetValue instead.
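This keyed setup also addresses the testability concern from the question: in a test you can register fakes under the same key, so the processor under test receives mock dependencies. A sketch, assuming the J1939TagProcessor constructor is changed to accept interfaces (ICacheProvider and IPgnRepository are illustrative names) that Moq can stand in for:

using Autofac;
using Moq;

// Hypothetical test wiring: the "j1939" key now resolves to a processor
// built from mocks instead of a live Memcached server and database.
var mockCache = new Mock<ICacheProvider>();
var mockRepo = new Mock<IPgnRepository>();

var builder = new ContainerBuilder();
builder.Register(c => new J1939TagProcessor(mockCache.Object, mockRepo.Object))
       .Keyed<ITagProcessor>("j1939");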
Using something like StructureMap, you could use the ObjectFactory which, when configured, would return you a named concrete instance.
http://structuremap.net/structuremap/index.html
I suggest you look through another SO post. It solves several problems at once, including how to replace constructor values without a mess. Specifically, the parameters to the constructor simply become static fields of a "Context" class, which are read by the constructor of the interior class.
I am using Entity Framework for the first time, and would like to know if I am using it according to best practice.
I have created a separate class in my business logic which will handle the entity context. The problem I have is that, in all the videos I have seen, they usually wrap the context in a using statement to make sure it's closed; but obviously I can't do this in my business logic, as the context would be closed before I could actually use it.
So is what I'm doing OK? A couple of examples:
public IEnumerable<Article> GetLatestArticles(bool Authorised)
{
    var ctx = new ArticleNetEntities();
    return ctx.Articles.Where(x => x.IsApproved == Authorised).OrderBy(x => x.ArticleDate);
}

public IEnumerable<Article> GetArticlesByMember(int MemberId, bool Authorised)
{
    var ctx = new ArticleNetEntities();
    return ctx.Articles.Where(x => x.MemberID == MemberId && x.IsApproved == Authorised).OrderBy(x => x.ArticleDate);
}
I just want to make sure I'm not building something that's going to die when a lot of people use it?
It really depends on how you want to expose your repository/data store.
Not sure what you mean by "the context will be closed, therefore I cannot do business logic". Do your business logic inside the using statement. Or if your business logic is in a different class, then let's continue. :)
Some people return concrete collections from their Repository, in which case you can wrap the context in the using statement:
public class ArticleRepository
{
    public List<Article> GetArticles()
    {
        List<Article> articles = null;
        using (var db = new ArticleNetEntities())
        {
            articles = db.Articles.Where(something).Take(some).ToList();
        }
        return articles;
    }
}
The advantage of that is satisfying the good practice with connections: open as late as you can, and close as early as you can.
You can encapsulate all your business logic inside the using statement.
The disadvantages: your repository becomes aware of business logic, which I personally do not like, and you end up with a different method for each particular scenario.
The second option - new up a context as part of the Repository, and make it implement IDisposable.
public class ArticleRepository : IDisposable
{
    ArticleNetEntities db;

    public ArticleRepository()
    {
        db = new ArticleNetEntities();
    }

    public List<Article> GetArticles()
    {
        return db.Articles.Where(something).Take(some).ToList();
    }

    public void Dispose()
    {
        db.Dispose();
    }
}
And then:
using (var repository = new ArticleRepository())
{
var articles = repository.GetArticles();
}
Or the third option (my favourite): use dependency injection. Decouple all the context work from your repository, and let the DI container handle disposal of resources:
public class ArticleRepository
{
    private IObjectContext _ctx;

    public ArticleRepository(IObjectContext ctx)
    {
        _ctx = ctx;
    }

    public IQueryable<Article> Find()
    {
        return _ctx.Articles;
    }
}
Your chosen DI container will inject the concrete ObjectContext into the instantiation of the Repository, with a configured lifetime (Singleton, HttpContext, ThreadLocal, etc), and dispose of it based on that configuration.
I have it set up so each HTTP request is given a new context. When the request is finished, my DI container will automatically dispose of the context.
I also use the Unit of Work pattern here to allow multiple repositories to work with one object context; a sketch follows.
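A minimal Unit of Work sketch (illustrative names, assuming IObjectContext exposes SaveChanges and Dispose):

public interface IUnitOfWork : IDisposable
{
    void Commit();
}

public class UnitOfWork : IUnitOfWork
{
    private readonly IObjectContext _ctx;

    // The same IObjectContext instance is injected into every repository
    // participating in this unit of work.
    public UnitOfWork(IObjectContext ctx)
    {
        _ctx = ctx;
    }

    public void Commit()
    {
        _ctx.SaveChanges(); // persist the changes from all repositories at once
    }

    public void Dispose()
    {
        _ctx.Dispose();
    }
}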
You may have also noticed I prefer to return IQueryable from my repository (as opposed to a concrete List). Much more powerful (yet risky, if you don't understand the implications). My service layer performs the business logic on the IQueryable and then returns the concrete collection to the UI.
That is by far the most powerful option, as it allows a simple-as-heck repository; the Unit of Work manages the context, the service layer manages the business logic, and the DI container handles the lifetime/disposal of resources/objects.
Let me know if you want more info on that - as there is quite a lot to it, even more than this surprisingly long answer. :)
I would have the ctx as a private variable within each class, then create a new instance of this each time and then dispose when finished.
public class ArticleService : IDisposable
{
    private ArticleEntities _ctx;

    public ArticleService()
    {
        _ctx = new ArticleEntities();
    }

    public IEnumerable<Article> GetLatestArticles(bool Authorised)
    {
        return _ctx.Articles.Where(x => x.IsApproved == Authorised).OrderBy(x => x.ArticleDate);
    }

    public IEnumerable<Article> GetArticlesByMember(int MemberId, bool Authorised)
    {
        return _ctx.Articles.Where(x => x.MemberID == MemberId && x.IsApproved == Authorised).OrderBy(x => x.ArticleDate);
    }

    public void Dispose()
    {
        _ctx.Dispose();
        _ctx = null;
    }
}
Then when calling it:
ArticleService articleService = new ArticleService();
IEnumerable<Article> article = articleService.GetLatestArticles(true);
articleService.Dispose(); // killing the connection
This way you can also add/update other objects within the same context and call a save method which saves any changes to the db through Entity Framework; a sketch of such a method follows.
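The save method can be a simple passthrough on the same service (a hypothetical addition, reusing the _ctx field from the class above):

// Hypothetical addition to ArticleService: persist everything tracked
// by the shared context in one call.
public void Save()
{
    _ctx.SaveChanges();
}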
In my experience this code is not good, because you lose the ability to navigate relationships through navigation properties.
public List<Article> getArticles()
{
    using (var db = new ArticleNetEntities())
    {
        return db.Articles.Where(something).ToList();
    }
}
Using this approach you can't use the following code, because a.Members is always null (the db context is closed and can't load the data automatically):
var articles = Data.getArticles();
foreach (var a in articles)
{
    if (a.Members.Any(p => p.Name == "miki"))
    {
        ...
    }
    else
    {
        ...
    }
}
Using only a global db context is a bad idea, because you would also need a way to discard changes. Suppose at one point in your application you do this, but don't save the changes and close the window:
var article = globalcontext.getArticleByID(10);
article.Approved = true;
Then at another point in the application you perform some other operation and save:
//..... something
globalcontext.saveChanges();
In this case, the Approved property of the earlier article was already marked as modified by Entity Framework, so when you save, Approved is set to true!
The best approach for me is to use one context per class.
You can pass the context to another external method if you need to:
class EditArticle
{
    private DbEntities de;
    private Article currentArticle;

    public EditArticle()
    {
        de = new DbEntities(); // initialize on new instance
    }

    public void loadArticleToEdit(Article a)
    {
        // a is from another context
        currentArticle = de.Article.Single(p => p.IdArticle == a.IdArticle);
    }

    private void saveChanges()
    {
        ...
        de.SaveChanges();
    }
}
What you can also do is store your context at a higher level.
E.g., you can have a static class storing the current context:
class ContextManager
{
    [ThreadStatic]
    public static ArticleEntities CurrentContext;
}
Then, somewhere outside you do something like this:
using (ContextManager.CurrentContext = new ArticleEntities())
{
    IEnumerable<Article> article = articleService.GetLatestArticles(true);
}
Then, inside GetLatestArticles, you just use the same ContextManager.CurrentContext.
Of course, this is just the basic idea. You can make this a lot more workable by using service providers, IoC and such.
You can start preparing Entity Framework from the data access layer by creating a generic repository class for all the required Entity Framework functions. Then you can use it in the business layer (encapsulated); a minimal sketch of such a repository follows the list below.
Here are the best practices that I have used for Entity Framework in the data, business, and UI layers:
Techniques used for this practice:
Applying SOLID architecture principles
Using Repository design pattern
Only one class to go (and you will find it ready)
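As a starting point, a minimal generic repository sketch (the names are illustrative, assuming EF's DbContext API rather than any particular library):

using System.Linq;
using System.Data.Entity;

// One repository class covering the common EF operations for any entity type.
public class Repository<T> where T : class
{
    private readonly DbContext _ctx;

    public Repository(DbContext ctx)
    {
        _ctx = ctx;
    }

    public IQueryable<T> GetAll()
    {
        return _ctx.Set<T>();
    }

    public void Add(T entity)
    {
        _ctx.Set<T>().Add(entity);
    }

    public void Remove(T entity)
    {
        _ctx.Set<T>().Remove(entity);
    }

    public void Save()
    {
        _ctx.SaveChanges();
    }
}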