I want to share a DB context with another method called from outside (an inherited class) without creating a new context unless the existing one has been disposed. I need a way to check whether the context is disposed, so that I can create a new one only when necessary.
This is a REST API. There is a bulk upload for multiple entities, and I want to share the transaction so that if one entity fails, nothing is committed to the DB.
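To make the goal concrete, the bulk upload I have in mind looks roughly like this; this is only a sketch, and the names (`FooDbContext`, `UploadItem`, `BulkUploadAsync`) are hypothetical, not from the actual code:

```csharp
// Hypothetical names throughout: FooDbContext, UploadItem, BulkUploadAsync.
using System.Collections.Generic;
using System.Threading.Tasks;
using Microsoft.EntityFrameworkCore;

public static class BulkUploader
{
    public static async Task BulkUploadAsync(FooDbContext context, IEnumerable<UploadItem> items)
    {
        // One explicit transaction shared by every entity in the batch.
        await using var transaction = await context.Database.BeginTransactionAsync();
        try
        {
            foreach (var item in items)
            {
                context.Add(item);
            }
            await context.SaveChangesAsync();
            await transaction.CommitAsync();   // all-or-nothing
        }
        catch
        {
            await transaction.RollbackAsync(); // one failure discards the whole batch
            throw;
        }
    }
}
```

Note that a single SaveChangesAsync call is already transactional on its own; an explicit transaction only pays off when several methods or SaveChanges calls must commit or fail together.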
Regardless of the comments questioning the design quality, there are valid scenarios where the dbContext could be in a disposed state, such as (not a complete list):
For example (within injected dbContext MVC services):
Your service iterates through a lower tier of one or more service calls, possibly using an asynchronous socket handler in a lower-tier API library, with each response using the parent requester's dbContext.
Your service calls a database job (asynchronous task or not).
Exception handling that logs to the database (if the dbContext is already lost, you want to avoid losing the logging/debug details).
Note: Long-running processes using a dbContext like this should follow the good practice of avoiding dbContext bloat, such as using the AsNoTracking() method where possible, as bloat can quickly become a concern.
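For illustration, a read-only query that keeps the context lean might look like this fragment (the `dbContext`, entity set, and predicate are hypothetical):

```csharp
// AsNoTracking: results are not added to the change tracker,
// so a long-lived context does not accumulate tracked entities.
var activeItems = await dbContext.FooEntities
    .AsNoTracking()
    .Where(e => e.IsActive)
    .ToListAsync();
```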
Performance consideration:
The most trusted option is to recreate the dbContext on each child (API call / async task), but this may incur undesired performance overhead, such as when dealing with thousands of iterative API calls where atomic unit transactions are not viable.
Solution Tested Using Framework:
Entity Type: Microsoft.EntityFrameworkCore.DbContext
Version=5.0.16.0, Culture=neutral, PublicKeyToken=adb9793829ddae60
Warnings:
Plenty of warnings exist about this kind of extended dbContext use; it should be used with caution and avoided where possible.
See warning details: c-sharp-working-with-entity-framework-in-a-multi-threaded-server
Extend your DbContext with a partial class, or add the method to your existing extended partial class.
FYI - please comment if this is still working on updated EntityFrameworkCore libraries.
using System.Reflection;

public partial class FooDbContext : DbContext
{
    // Tested against EntityFrameworkCore 5.0.16 DbContext (confirm if working with any core library upgrades)
    public bool IsDisposed()
    {
        bool result = true;
        var typeDbContext = typeof(DbContext);
        var isDisposedTypeField = typeDbContext.GetField("_disposed", BindingFlags.NonPublic | BindingFlags.Instance);

        if (isDisposedTypeField != null)
        {
            result = (bool)isDisposedTypeField.GetValue(this);
        }

        return result;
    }
}
Usage:
if (fooDbContext == null || fooDbContext.IsDisposed())
{
    // Recreate the context
}
Update: The threading issues were caused by ApplicationDbContext being registered as Scoped while the services were registered as Transient. Registering my ApplicationDbContext as Transient fixed the threading issue. However, I do not want to lose the Unit of Work and Change Tracking functionality. Instead, I now keep the ApplicationDbContext as Scoped and fix the issue by using a Semaphore to prevent simultaneous calls, as explained in my answer here: https://stackoverflow.com/a/68486531/13678817
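As a sketch, the Semaphore gating can look like this; the `DbContextGate` class and the `RunSerializedAsync` helper are illustrative names, not the actual code from my answer. A `SemaphoreSlim(1, 1)` guarantees at most one operation uses the shared DbContext at a time:

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

public static class DbContextGate
{
    // One permit: at most one operation may use the shared DbContext at a time.
    private static readonly SemaphoreSlim Gate = new SemaphoreSlim(1, 1);

    public static async Task<T> RunSerializedAsync<T>(Func<Task<T>> operation)
    {
        await Gate.WaitAsync();
        try
        {
            return await operation();
        }
        finally
        {
            Gate.Release(); // always release, even if the operation throws
        }
    }
}
```

Service methods would then wrap their DbContext queries, for example `await DbContextGate.RunSerializedAsync(() => _context.People.ToListAsync())`.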
My Blazor Server project uses EF Core, with a complex database model (some entities having 5+ levels of child entities).
When the user navigates to a new component from the nav menu, the relevant entities are loaded in OnInitializedAsync (where I inject a service for each entity type). Each service is registered as Transient in startup. The loaded entities are manipulated in this component, and in its child/nested components.
However, this approach resulted in threading issues (different threads concurrently using the same instance of DbContext) when the user navigated between components while the previous component's services were still loading entities (...a second operation has started...).
Following is the simplified original code causing this error.
Component 1:
@page "/bases"
@using ...
@inject IBasesService basesService
@inject IPeopleService peopleService
<h1>...</h1> @* Code omitted for brevity *@
@code {
    List<Base> bases;
    List<Person> people;

    protected override async Task OnInitializedAsync()
    {
        bases = await basesService.GetBasesAndRelatedEntities();
        people = await peopleService.GetPeopleAndRelatedEntities();
    }
}
Component 2:
@page "/people"
@using ...
@inject IBasesService basesService
@inject IPeopleService peopleService
<h1>...</h1> @* Code omitted for brevity *@
@code {
    List<Person> people;

    protected override async Task OnInitializedAsync()
    {
        people = await peopleService.GetPeopleAndRelatedEntities();
    }
}
Furthermore, all services have this structure, and are registered in startup as transient:
My BasesService:
public interface IBasesService
{
Task<List<Base>> Get();
Task<List<Base>> GetBasesAndRelatedEntities();
Task<Base> Get(Guid id);
Task<Base> Add(Base Base);
Task<Base> Update(Base Base);
Task<Base> Delete(Guid id);
void DetachEntity(Base Base);
}
public class BasesService : IBasesService
{
private readonly ApplicationDbContext _context;
public BasesService(ApplicationDbContext context)
{
_context = context;
}
public async Task<List<Base>> GetBasesAndRelatedEntities()
{
return await _context.Bases.Include(a => a.People).ToListAsync();
}
//...code omitted for brevity
DbContext is registered as follows:
services.AddDbContextFactory<ApplicationDbContext>(b =>
    b.UseSqlServer(
        Configuration.GetConnectionString("MyDbConnection"), sqlServerOptionsAction: sqlOptions =>
        {
            // Updated according to https://dev-squared.com/2018/07/03/tips-for-improving-entity-framework-core-performance-with-azure-sql-databases/
            sqlOptions.EnableRetryOnFailure(
                maxRetryCount: 5,
                maxRetryDelay: TimeSpan.FromSeconds(5),
                errorNumbersToAdd: null);
        }
    ));
Now, my user can switch between /bases and /people using the nav menu. If they do this quickly, the next component's await peopleService.GetPeopleAndRelatedEntities(); is called before the previous component's call has finished, and this causes an error as follows:
info: Microsoft.EntityFrameworkCore.Database.Command[20101]
Executed DbCommand (9ms) [Parameters=[], CommandType='Text', CommandTimeout='30']
SELECT [**Sensitive DB statement omitted**]
fail: Microsoft.EntityFrameworkCore.Query[10100]
An exception occurred while iterating over the results of a query for context type '[**omitted**].Data.ApplicationDbContext'.
System.InvalidOperationException: A second operation was started on this context before a previous operation completed. This is usually caused by different threads concurrently using the same instance of DbContext. For more information on how to avoid threading issues with DbContext, see https://go.microsoft.com/fwlink/?linkid=2097913.
at Microsoft.EntityFrameworkCore.Internal.ConcurrencyDetector.EnterCriticalSection()
at Microsoft.EntityFrameworkCore.Query.Internal.SplitQueryingEnumerable`1.AsyncEnumerator.MoveNextAsync()
System.InvalidOperationException: A second operation was started on this context before a previous operation completed. This is usually caused by different threads concurrently using the same instance of DbContext. For more information on how to avoid threading issues with DbContext, see https://go.microsoft.com/fwlink/?linkid=2097913.
at Microsoft.EntityFrameworkCore.Internal.ConcurrencyDetector.EnterCriticalSection()
at Microsoft.EntityFrameworkCore.Query.Internal.SplitQueryingEnumerable`1.AsyncEnumerator.MoveNextAsync()
dbug: Microsoft.Azure.SignalR.Connections.Client.Internal.WebSocketsTransport[12]
Message received. Type: Binary, size: 422, EndOfMessage: True.
dbug: Microsoft.Azure.SignalR.ServiceConnection[16]
Received 422 bytes from service 468a12a0...
warn: Microsoft.AspNetCore.Components.Server.Circuits.RemoteRenderer[100]
Unhandled exception rendering component: A second operation was started on this context before a previous operation completed. This is usually caused by different threads concurrently using the same instance of DbContext. For more information on how to avoid threading issues with DbContext, see https://go.microsoft.com/fwlink/?linkid=2097913.
System.InvalidOperationException: A second operation was started on this context before a previous operation completed. This is usually caused by different threads concurrently using the same instance of DbContext. For more information on how to avoid threading issues with DbContext, see https://go.microsoft.com/fwlink/?linkid=2097913.
at Microsoft.EntityFrameworkCore.Internal.ConcurrencyDetector.EnterCriticalSection()
at Microsoft.EntityFrameworkCore.Query.Internal.SplitQueryingEnumerable`1.AsyncEnumerator.MoveNextAsync()
at Microsoft.EntityFrameworkCore.EntityFrameworkQueryableExtensions.ToListAsync[TSource](IQueryable`1 source, CancellationToken cancellationToken)
at Microsoft.EntityFrameworkCore.EntityFrameworkQueryableExtensions.ToListAsync[TSource](IQueryable`1 source, CancellationToken cancellationToken)
at [**path to service omitted**]...cs:line 35
at [**path to component omitted**].razor:line 77
at Microsoft.AspNetCore.Components.ComponentBase.RunInitAndSetParametersAsync()
fail: Microsoft.AspNetCore.Components.Server.Circuits.CircuitHost[111]
Unhandled exception in circuit 'cvOyWXdG_oikG_YJe2ehrsHsI3VQDJw2U8YIySmroTM'.
System.InvalidOperationException: A second operation was started on this context before a previous operation completed. This is usually caused by different threads concurrently using the same instance of DbContext. For more information on how to avoid threading issues with DbContext, see https://go.microsoft.com/fwlink/?linkid=2097913.
at Microsoft.EntityFrameworkCore.Internal.ConcurrencyDetector.EnterCriticalSection()
at Microsoft.EntityFrameworkCore.Query.Internal.SplitQueryingEnumerable`1.AsyncEnumerator.MoveNextAsync()
at Microsoft.EntityFrameworkCore.EntityFrameworkQueryableExtensions.ToListAsync[TSource](IQueryable`1 source, CancellationToken cancellationToken)
at Microsoft.EntityFrameworkCore.EntityFrameworkQueryableExtensions.ToListAsync[TSource](IQueryable`1 source, CancellationToken cancellationToken)
at [**path to service omitted**].cs:line 35
at [**path to component omitted**].razor:line 77
I read through everything Stack Overflow and the MS docs have available on this topic, and adapted my project according to the recommended approach of using a DbContextFactory:
Blazor concurrency problem using Entity Framework Core
https://stackoverflow.com/a/58047471/13678817
Let's say I have the following DB model:
Each AlphaObject has many BetaObjects, which have many CharlieObjects. Each CharlieObject has one BetaObject, and each BetaObject has one AlphaObject.
I adapted all services to create a new DbContext with DbContextFactory, before each operation:
private readonly IDbContextFactory<ApplicationDbContext> _contextFactory;

public AlphaObjectsService(IDbContextFactory<ApplicationDbContext> contextFactory)
{
    _contextFactory = contextFactory;
}
public async Task<List<AlphaObject>> GetAlphaObjectAndRelatedEntities()
{
using (var _context = _contextFactory.CreateDbContext())
{
    return await _context.AlphaObjects.Include(a => a.BetaObjects).ThenInclude(b => b.CharlieObjects).ToListAsync();
}
}
Before, when I would load a List<AlphaObject> alphaObjects and include all related BetaObject entities (and, in turn, their related CharlieObject entities) in the service, I could later load a List<BetaObject> betaObjects and, without explicitly including their related AlphaObjects or CharlieObjects, they would already be loaded if they had been loaded before.
Now, when working with a 'DbContext per operation', many of my related entities are null if I don't load them again explicitly. I am also worried about manipulating entities and their related entities, and saving all these changes, without the normal lifetime of a DbContext with Change Tracking. The EF Core documentation states that the normal lifetime of a DbContext should be:
Create the DbContext instance
Track some entities
Make some changes to the entities
Call SaveChanges to update the database
Dispose the DbContext instance.
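The five steps above can be sketched as a fragment inside an async method; the context and entity names follow the question's hypothetical model, and `options` is assumed to be configured elsewhere:

```csharp
// 1. Create the DbContext instance
using (var context = new ApplicationDbContext(options))
{
    // 2. Track some entities
    var alpha = await context.AlphaObjects
        .Include(a => a.BetaObjects)
        .FirstAsync();

    // 3. Make some changes to the entities
    alpha.Name = "updated"; // hypothetical property

    // 4. Call SaveChanges to update the database
    await context.SaveChangesAsync();
} // 5. Dispose the DbContext instance (end of the using block)
```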
In order to solve my threading (concurrent access of the same DbContext) errors, but continue to manipulate my entities in the manner that EF Core was intended to be used:
Should I extend the lifetime of my DbContext to the lifetime of my main component in which the entities are loaded from the database?
In this way, after loading one entity with all its related entities, I don't need to reload already-loaded entities when working with another entity type. I would also get other benefits, such as Change Tracking for the lifetime of the component, and all changes to tracked entities would be saved when calling context.SaveChangesAsync(). However, I would need to manually dispose each context created with the DbContextFactory.
As per the MS docs, it seems I would have to access the database directly from the component in order to implement IDisposable (I would need to create the context directly in my component, not in a service, as in the MS Docs sample app: https://learn.microsoft.com/en-us/aspnet/core/blazor/blazor-server-ef-core?view=aspnetcore-5.0). Is it really optimal/recommended to create and access the DbContext directly from within components? Otherwise, instead of implementing IDisposable, would using OwningComponentBase have the exact same capability to dispose the context when the component's lifetime ends, except that I could keep using my existing services?
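For what it's worth, the OwningComponentBase route could look roughly like this; the wiring below is an illustrative sketch (reusing the question's IBasesService), not code from the actual project:

```
@page "/bases"
@inherits OwningComponentBase<IBasesService>

@code {
    List<Base> bases;

    protected override async Task OnInitializedAsync()
    {
        // Service is resolved from a DI scope owned by this component;
        // the scope (and any scoped DbContext inside it) is disposed with the component.
        bases = await Service.GetBasesAndRelatedEntities();
    }
}
```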
Can I continue to dispose my new DbContext after each service operation - using (var _context = _contextFactory.CreateDbContext())?
Then, must I simply ensure that each time I load a different entity type, I also load all the required related entities again? I.e. return await context.AlphaObject.Include(a => a.BetaObject).ThenInclude(b => b.CharlieObject).ToListAsync();, and when loading a list of CharlieObject, again explicitly include BetaObject and AlphaObject? Will I still be able to make changes to an AlphaObject and its related Beta- and CharlieObjects throughout my child components? And when finished with all the changes, will setting context.Entry(AlphaObject).State = EntityState.Modified and calling context.SaveChangesAsync() also save the changes made to the BetaObjects and CharlieObjects related to the AlphaObject, or would one need to set the state of each entity to EntityState.Modified?
In short, I would love to understand the correct way to ensure related entities are loaded (and manipulated and updated) properly when working outside a single DbContext lifetime, as this seems to be the recommended approach. In the meantime, I will go ahead and adapt my project to use a new context per service operation, and continue to learn as I go along. I will update this question as I learn more.
tl;dr How can I use Entity Framework in a multithreaded .NET Core API application even though DbContext is not threadsafe?
Context
I am working on a .NET Core API app exposing several RESTful interfaces that access the database and read data from it, while at the same time running several TimedHostedServices as background workers that regularly poll data from other web services and store it in the database.
I am aware of the fact that DbContext is not thread-safe. I have read a lot of docs, blog posts, and answers here on Stack Overflow, and I could find many (partly contradictory) answers, but no real "best practice" for working with DI.
Things I tried
Using the default ServiceLifetime.Scoped via the AddDbContext extension method results in exceptions due to race conditions.
I don't want to work with locks (e.g. Semaphore), as the obvious downsides are:
the code is polluted with locks and try/catch/finally for safely releasing the locks
it doesn't really seem 'robust', i.e. if I forget to lock a region that accesses the DbContext.
it seems redundant and 'unnatural' to artificially synchronize db access in the app when working with a database that already handles concurrent connections and access
Injecting DbContextOptions<MyDbContext> instead of MyDbContext, building the context only when I need to access the db, and using a using statement to dispose of it immediately after the read/write seems like a lot of resource-usage overhead and unnecessarily many connection openings and closings.
Question
I am really puzzled: how can this be achieved?
I don't think my use case is super special - populating the db from a background worker and querying it from the web API layer - so there should be a meaningful way of doing this with EF Core.
Thanks a lot!
You should create a scope whenever your TimedHostedService triggers.
Inject the service provider in your constructor:
private readonly IServiceProvider _services;

public MyServiceService(IServiceProvider services)
{
    _services = services;
}
and then create a scope whenever the task triggers
using (var scope = _services.CreateScope())
{
var anotherService = scope.ServiceProvider.GetRequiredService<AnotherService>();
anotherService.Something();
}
A more complete example is available in the docs.
Another approach is to create your own DbContextFactory and instantiate a new instance for every query.
public class DbContextFactory
{
    private readonly string _connectionString;

    public DbContextFactory(string connectionString)
        => _connectionString = connectionString;

    public YourDbContext Create()
    {
        var options = new DbContextOptionsBuilder<YourDbContext>()
            .UseSqlServer(_connectionString)
            .Options;

        return new YourDbContext(options);
    }
}
Usage
public class Service
{
    private readonly DbContextFactory _dbContextFactory;

    public Service(DbContextFactory dbContextFactory)
        => _dbContextFactory = dbContextFactory;

    public void Execute()
    {
        using (var context = _dbContextFactory.Create())
        {
            // use context
        }
    }
}
With a factory you don't need to worry about scopes anymore, and your code stays free of ASP.NET Core dependencies.
You will be able to execute queries asynchronously, which is not possible with a scoped DbContext without workarounds.
You can always be confident about what data is saved when calling .SaveChanges(), whereas with a scoped DbContext there is a possibility that some entity was changed in another class.
We're using ASP.NET Entity Framework Core for querying our MSSQL database in our Web API app. Sometimes when we have big traffic, querying to DB ends with this error:
Timeout expired. The timeout period elapsed prior to obtaining a connection from the pool. This may have occurred because all pooled connections were in use and max pool size was reached.
I wonder if our pattern of using DbContext and querying is correct, or if I am missing some using/dispose pattern and the error is caused by a memory leak (after some research, I read that I should not use using because the lifetime is managed by the framework). I am following the documentation...
My connectionString:
"myConnection": "Server=xxx;Database=xxx;user id=xxx;password=xxx;Max Pool Size=200;Timeout=200;"
My Startup.cs
public void ConfigureServices(IServiceCollection services)
{
.....
// scoped context
services.AddDbContext<MyDbContext>(
options => options.UseSqlServer(this.Configuration.GetConnectionString("myConnection")));
}
then in controllers I used dbcontext by dependency injection:
public class MyController : Controller
{
    private MyDbContext Context { get; }

    public MyController(MyDbContext context)
    {
        this.Context = context;
    }

    public ActionResult Get(int id)
    {
        // querying
        return this.Context.tRealty.Where(x => x.id == id).FirstOrDefault();
    }
}
Should I use something like:
using (var context = this.Context)
{
return this.Context.tRealty.Where(x => x.id == id).FirstOrDefault();
}
But I think that this is bad pattern when I am using dependency injection of DbContext.
I think the problem was caused by storing objects from database-context queries in the in-memory cache. I had one big LINQ query to the database context with some other subqueries inside. I called FirstOrDefault() at the end of the main query, but not inside the subqueries. The controller was fine with that; it materializes queries by default.
return this.Context.tRealty.AsNoTracking().Where(
    x => x.Id == id && x.RealtyProcess == RealtyProcess.Visible).Select(
    s => new
    {
        .....
        // subquery
        videos = s.TVideo.Where(video => video.RealtyId == id && video.IsPublicOnYouTube)
            .Select(video => video.YouTubeId).ToList(), // the previously missing ToList()
        .....
    }).FirstOrDefault();
And there was the problem: the subqueries were holding a connection to the database context while they were stored in the in-memory cache. When I implemented a Redis distributed cache, it first failed with some strange errors. It helped when I added ToList() or FirstOrDefault() to all my subqueries, because a distributed cache needs materialized objects.
Now I have all my queries materialized explicitly, and I no longer get the max pool size was reached error. So one must be careful when storing objects from database-context queries in an in-memory cache: all queries need to be materialized to avoid holding a connection somewhere in memory.
You can set the lifetime of the DbContext in your startup.cs, see if this helps:
services.AddDbContext<MyDbContext>(options => options
.UseSqlServer(connection), ServiceLifetime.Scoped);
Also if your query is a simple read you can remove tracking by using .AsNoTracking().
Another way to improve your throughput is to prevent locks by using a transaction block with IsolationLevel.ReadUncommitted for simple reads.
You can also use the Snapshot isolation level - which is slightly more restrictive - if you do not want dirty reads.
TransactionOptions transactionOptions = new TransactionOptions() { IsolationLevel = IsolationLevel.ReadUncommitted};
using (TransactionScope transactionScope = new TransactionScope(TransactionScopeOption.Required, transactionOptions))
{
// insert magic here
}
Edit : As the author of the question mentioned, the above code is not (yet?) possible in EF Core.
A workaround can be found here using an explicit transaction:
using (var connection = new SqlConnection(connectionString))
{
connection.Open();
using (var transaction = connection.BeginTransaction())
{
// transaction.Commit();
// transaction.Rollback();
}
}
I have not tested this.
Edit 2: Another untested snippet where you can have executed commands to set isolation level:
using (var c1= new SqlConnection(connectionString))
{
c1.Open();
// set isolation level
Exec(c1, "SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;");
Exec(c1, "BEGIN TRANSACTION;");
// do your magic here
}
With Exec:
private static void Exec(SqlConnection c, string s)
{
using (var m = c.CreateCommand())
{
m.CommandText = s;
m.ExecuteNonQuery();
}
}
Edit 3: According to that thread, Transactions will be supported from .NET Core version 1.2 onwards.
#mukundabrt this is tracked by dotnet/corefx#2949. Note that
TransactionScope has already been ported to .NET Core but will only be
available in .NET Core 1.2.
I am adding an alternative answer, in case anyone lands here with a slightly different root cause, as was the case for my .NET Core MVC application.
In my scenario, the application was producing these "timeout expired... max pool size was reached" errors due to mixed use of async/await and Task.Result within the same controller.
I had done this in an attempt to reuse code by calling a certain asynchronous method in my constructor to set a property. Since constructors do not allow asynchronous calls, I was forced to use Task.Result. However, I was using async Task<IActionResult> methods to await database calls within the same controller. We engaged Microsoft Support, and an Engineer helped explain why this happens:
Looks like we are making a blocking call to an Async method inside
[...] constructor.
...
So, basically something is going wrong in the call to above
highlighted async method and because of which all the threads listed
above are blocked.
Looking at the threads which are doing same operation and blocked:
...
85.71% of threads blocked (174 threads)
We should avoid mixing async and blocking code. Mixed async and
blocking code can cause deadlocks, more-complex error handling and
unexpected blocking of context threads.
https://msdn.microsoft.com/en-us/magazine/jj991977.aspx
https://blogs.msdn.microsoft.com/jpsanders/2017/08/28/asp-net-do-not-use-task-result-in-main-context/
Action Plan
Please engage your application team to revisit the application code of above mentioned method to understand what is going
wrong.
Also, I would appreciate if you could update your application logic to
not mix async and blocking code. You could use await Task instead of
Task.Wait or Task.Result.
So in our case, I pulled the Task.Result out of the constructor and moved it into a private async method where we could await it. Then, since I only want it to run the task once per use of the controller, I store the result to that local property, and run the task from within that method only if the property value is null.
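The fix described above can be sketched as follows; the names (`CachedDataProvider`, `FetchFromDatabaseAsync`) are illustrative stand-ins, not the actual application code. The expensive call is awaited inside an async method and cached on first use, instead of being blocked on with Task.Result in the constructor:

```csharp
using System.Threading.Tasks;

public class CachedDataProvider
{
    private string _cached; // result cached after the first awaited load

    // Illustrative stand-in for the real asynchronous database call.
    private Task<string> FetchFromDatabaseAsync() => Task.FromResult("data");

    // No Task.Result in a constructor: callers await this method instead.
    public async Task<string> GetDataAsync()
    {
        if (_cached == null)
        {
            _cached = await FetchFromDatabaseAsync(); // runs only when not yet cached
        }
        return _cached;
    }
}
```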
In my defense, I expected the compiler would at least throw a warning if mixing async and blocking code is so problematic. However, it seems obvious enough to me, in hindsight!
Hopefully, this helps someone...
I'm trying to figure out the best way to manage the DbContext. I've seen code samples that don't dispose and I've seen people say that that is a bad idea. Is it appropriate for me to do something like below? Also, should I put every transaction, including reads, in a new DbContext? This might be another question, but is the part about the EntityState necessary?
public abstract class GenericRepository<T> where T : EntityData
{
    protected MyDbContext Context
    {
        get { return new MyDbContext(); }
    }

    public T Save(T obj)
    {
        T item;
        using (var context = Context)
        {
            var set = context.Set<T>();
            if (String.IsNullOrEmpty(obj.Id))
                item = set.Add(obj);
            else
            {
                item = set.Find(obj.Id);
                item = obj;
            }

            // taken from another code sample
            var entry = context.Entry(item);
            if (entry.State == EntityState.Detached)
            {
                // Need to set modified so any detached entities are updated
                // otherwise they won't be sent across to the db.
                // Since it would've been outside the context, change tracking
                // wouldn't have occurred anyway, so we have no idea about its state - save it!
                set.Attach(item);
                context.Entry(item).State = EntityState.Modified;
            }
            context.SaveChanges();
        }
        return item;
    }
}
EDIT
I also have an extended class that implements this function below. The context is not being wrapped in a using statement in this query, so I'm a little suspicious of my code.
public IQueryable<T> FindByAccountId(string accountId)
{
    return from item in Context.Set<T>()
           let user = UserRepository.FindByAccountId(accountId).FirstOrDefault()
           where item.UserId == user.Id
           select item;
}
Contexts should really be created on a per-request basis. The request comes in, a new context is created, the context is used for the remainder of the request, and then it is disposed of at the end of the request. This gives you the benefit of request-long transactions and, as highlighted by HamidP, the added benefit of cached entities; any entities loaded into the context can be retrieved again without Entity Framework needing to query the database.
If you're using any kind of inversion of control container such as StructureMap then you can easily create HTTP request bound contexts by a configuration such as:
this.For<DbContext>().HybridHttpOrThreadLocalScoped().Use<DbContext>();
You're then able to inject your DbContext (or a derivative of it) into your repository and leave your IoC container of choice to dispose of the context at the end of the request. If you were to inject the same context into another repository, you'd receive the same instance of the context.
I hope this helps!
No, it should not.
The best approach here is to assign a context to each request: attach a context to the incoming request and dispose of it when the request is finished. With this approach you save the overhead of creating a context for every transaction, and you also benefit from the context's caching mechanism, because each context has its own internal cache and a request is likely to access data it accessed recently.
Creating a context for each transaction is not as bad as having a long-lived context! Don't ever do that: long-lived contexts lead to many concurrency issues, the cache becomes stale, memory consumption grows higher and higher, and you end up maintaining your application by miracles.
I have used many models for connecting to a DB. In my last project, working with C# and Entity Framework, I created a static class for the DB connection, but I had problems with opening and closing the connection: it gave me errors when more than 10-15 requests came in together. I solved it by changing the connection method so that I now connect per request, and I removed all the static methods and classes.
Now I want to know:
What is the best model for making a connection?
Should I close it after every query and open it before every use, or ...?
Is a connection in a static class a good model (so that I don't need to create it every time)?
Is there a good design pattern for this problem?
All of this comes down to the same question: what is the best method for making a database connection (static, abstract, per request, ...)?
For example, I am working on an SMS-sender web panel. I need to send 100K SMS per second; these SMS are collected with others into packages, where every package has 1~20 SMS. I then need to send 5K~100K packages per second, and when I send a package I must do these steps:
Update each SMS to delivered or not delivered
If delivered, decrease the user's balance in the useraccounts table
Update the SMS send count in the user table
Update the SMS send count in the mobile-number table
Update the SMS send count in the sender-number table
Update the package for delivered and failed SMS in the package table
Update the package with which thread sent it in the package table
Update the thread table with how many SMS were sent by this thread and how many failed
Add an account document for these transactions in the AccountDocument table
All these steps, plus a lot of other things like logs, the user interface, and monitoring widgets, need doing, and I need a DB connection for every single one of these transactions.
Now, what is the best model for connecting to the DB? Per human request, per thread request, or per single transaction?
Answers to your questions:
Close it. .NET does connection pooling for you under the hood.
Create it each time: use using (var conn = new ...) for every query; this way you'll make the most of the .NET pooling mechanism.
You can use the .NET ThreadPool (or your own custom one): configure the pool to use only, say, 10 threads in parallel and enqueue work items one after another. That way, no more than 10 connections will be used at the same time, and it will probably work faster.
More about custom thread pools: Custom ThreadPool Implementation
Per instance.
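A minimal sketch of the 'create it each time' pattern from the answer above (the connection string, query, and method name are placeholders); thanks to ADO.NET connection pooling, Dispose() returns the underlying physical connection to the pool rather than closing it:

```csharp
using System.Data.SqlClient;

public static class SmsQueries
{
    // New SqlConnection per query: Dispose() hands the connection back to the pool.
    public static int CountPendingSms(string connectionString)
    {
        using (var conn = new SqlConnection(connectionString))
        using (var cmd = new SqlCommand("SELECT COUNT(*) FROM PendingSms", conn))
        {
            conn.Open();
            return (int)cmd.ExecuteScalar();
        }
    }
}
```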
Here's my suggestion for an architecture:
Create a database table (queue) for pending SMS to be sent out;
each row will contain all the information needed for the SMS plus its current status.
Create a worker process, perhaps a Windows service, which will sample this table constantly - say, every 5 seconds. It will select the TOP ~20 SMS with status = 'pending to be sent' (statuses should be represented as ints) and update their status to 'sending'.
Each SMS will be sent out using a custom thread pool on the Windows service side.
At the end of the process, all the processed SMS statuses will be updated to 'done' using a CTE (common table expression - you can send a CTE with the ids of all the SMS rows that were just processed to do a 'bulk update' to the 'done' status).
You could make the status-update stored procedure the same one as the 'get pending' procedure. This way, you can select-for-update with no lock and make the database work faster.
This way, you can have more than just one processor service running (but then you'll have to lose the NOLOCK).
Remember to avoid as much locking as possible.
By the way, this is also good because you can send an SMS from anywhere in your system by simply adding a row to the pending-SMS table.
And one more thing: I would not recommend using Entity Framework for this, as it has too much going on under the hood. All you need for this kind of task is to call 3-4 stored procedures, and that's it. Maybe take a look at Dapper-dot-NET - it's a very lightweight micro-DAL framework which in most cases works more than 10 times faster than EF (Entity Framework).
1. Should I close it after every query?
.NET does that for you, so let it handle it; that's the garbage collector's job, so don't bother finalizing your objects manually. This is a good answer by Jon Skeet: https://stackoverflow.com/a/1998600/544283. However, you can use the using (IDisposable) { } statement to make sure Dispose is called deterministically rather than waiting on the GC. Here is a nice article about resource deallocation: http://www.codeproject.com/Articles/29534/IDisposable-What-Your-Mother-Never-Told-You-About.
2. A connection in static class is good?
Never make a data context static! Data contexts are not thread-safe or concurrency-safe.
3. Is there a good design pattern for this problem?
As Belogix mentioned, dependency injection and unit-of-work patterns are great; in fact, Entity Framework's DbContext is itself a unit of work. DI and UoW are a bit overrated though; they're not easy to implement the first time you handle an IoC container, and if you're going down that path I'd recommend Ninject. One other thing: you don't really need DI if you're not going to run tests. The awesomeness of these patterns is decoupling, so you can test and mock without sweat.
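For reference, a minimal Ninject sketch, under the assumption that one context service should be shared by everything resolved from the same kernel (in a real web app you would use a per-request scope from Ninject.Web.Common rather than a singleton):

```csharp
using Ninject;

var kernel = new StandardKernel();

// One FooContextService (and therefore one FooContext) per kernel;
// every service that takes a FooContextService gets the same instance.
kernel.Bind<FooContextService>().ToSelf().InSingletonScope();
kernel.Bind<UnicornService>().ToSelf();
kernel.Bind<DragonService>().ToSelf();

var unicorns = kernel.Get<UnicornService>();
var dragons  = kernel.Get<DragonService>();
```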
In short: if you're going to run tests against your code, go for these patterns. If not, here is an example of how you could share your data context among the services you'd like. This is the answer to your fourth question.
4. What is the best method for making database connection (static, per request)?
Your context service:
public class FooContextService {
    private readonly FooContext _ctx;

    public FooContext Context { get { return _ctx; } }

    public FooContextService() {
        _ctx = new FooContext();
    }
}
Other services:
public class UnicornService {
    private readonly FooContext _ctx;

    public UnicornService(FooContextService contextService) {
        if (contextService == null)
            throw new ArgumentNullException("contextService");
        _ctx = contextService.Context;
    }

    public ICollection<Unicorn> GetList() {
        return _ctx.Unicorns.ToList();
    }
}

public class DragonService {
    private readonly FooContext _ctx;

    public DragonService(FooContextService contextService) {
        if (contextService == null)
            throw new ArgumentNullException("contextService");
        _ctx = contextService.Context;
    }

    public ICollection<Dragon> GetList() {
        return _ctx.Dragons.ToList();
    }
}
Controller:
public class FantasyController : Controller {
    private readonly FooContextService _contextService = new FooContextService();
    private readonly UnicornService _unicornService;
    private readonly DragonService _dragonService;

    public FantasyController() {
        _unicornService = new UnicornService(_contextService);
        _dragonService = new DragonService(_contextService);
    }

    // Controller actions
}
Second thoughts (almost an edit):
If you need your context not to create proxies for your entities (and therefore not to do lazy loading either), you can overload your context service's constructor as follows:
public class FooContextService {
    private readonly FooContext _ctx;

    public FooContext Context { get { return _ctx; } }

    public FooContextService() : this(true) { }

    public FooContextService(bool proxyCreationEnabled) {
        _ctx = new FooContext();
        _ctx.Configuration.ProxyCreationEnabled = proxyCreationEnabled;
    }
}
NOTE:
If you set proxy creation to false you will not have lazy loading out of the box.
If you have API controllers, you don't want to deal with a full-blown object graph anyway.
EDIT:
Some reading first:
This link relates to a pre-release version of EF6: Entity Framework and Async.
Scott Allen posted about this in his blog: Async in Entity Framework 6.0.
If you're going to use Unit of Work I'd recommend to read this: Make the DbContext Ambient with UnitOfWorkScope.
Darin Dimitrov's answer on Do asynchronous operations in ASP.NET MVC use a thread from ThreadPool on .NET 4.
To open the underlying connection manually:
(_context as IObjectContextAdapter).ObjectContext.Connection.Open();
This is a great article about Managing Connections and Transactions.
Entity Framework exposes an EntityConnection through the Connection property. Read as: public sealed class EntityConnection : DbConnection.
Considerations for managing connections: (taken from previous link)
The object context will open the connection if it is not already open before an operation. If the object context opens the connection during an operation, it will always close the connection when the operation is complete.
If you manually open the connection, the object context will not close it. Calling Close or Dispose will close the connection.
If the object context creates the connection, the connection will always be disposed when the context is disposed.
In a long-running object context, you must ensure that the context is disposed when it is no longer required.
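Putting those considerations together, a sketch of a manually opened, longer-lived connection (EF6 era; FooContext is the context from the earlier example, and the elided middle section stands in for your real queries):

```csharp
using System.Data.Entity.Infrastructure;

// Keep one connection open across several operations, then make sure
// both the connection and the context are cleaned up.
using (var context = new FooContext())
{
    var objectContext = ((IObjectContextAdapter)context).ObjectContext;

    // We opened the connection ourselves, so EF will not close it
    // between operations (see the second consideration above).
    objectContext.Connection.Open();

    // ... several queries / SaveChanges calls on one open connection ...

    objectContext.Connection.Close();
} // disposing the context disposes the connection it created
```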
Hope it helps.
I think per request scales the best. Use a thread-safe connection pool and make the connection scope coincide with the unit of work. Let the service that's responsible for transactional behavior and units of work check out the connection, use it, and return it to the pool when the unit of work is either committed or rolled back.
UPDATE:
10-12 seconds to commit a status update? You've done something else wrong. Your question as written is not sufficient to provide a suitable answer.
Daily NASDAQ volume is 1.3B transactions, which over an 8-hour day works out to ~45K transactions per second. Your volume is 2x that of NASDAQ, and if you're trying to do it with one machine, keep in mind that NASDAQ uses more than one server.
I'd also wonder if you could do without that status being updated using ACID. After all, Starbucks doesn't use two-phase commit. Maybe a better solution would be to use a producer/consumer pattern with a blocking queue to update those statuses when you can after they're sent.
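A minimal, self-contained sketch of that producer/consumer idea using BlockingCollection; the database batch write is simulated by a counter here:

```csharp
using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;

class StatusUpdater
{
    static void Main()
    {
        // Senders enqueue the ids of messages they've sent; a single
        // consumer drains the queue and batches status updates instead
        // of committing one row at a time.
        var sentIds = new BlockingCollection<int>();

        var consumer = Task.Run(() =>
        {
            int count = 0;
            foreach (var id in sentIds.GetConsumingEnumerable())
                count++; // real code: add to a batch, flush every N ids
            Console.WriteLine(count);
        });

        // Producers: called once per SMS after it is sent.
        for (int id = 1; id <= 1000; id++)
            sentIds.Add(id);

        sentIds.CompleteAdding(); // lets GetConsumingEnumerable finish
        consumer.Wait();          // prints 1000
    }
}
```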