How to implement scheduled task on EF (DB first) entities? - c#

I am quite new to ASP.NET and have a website using Entity Framework. Every night, I need to do some work on my Person entities.
So I installed Quartz.NET and tried to use it this way in Global.asax:
<%@ Application Language="C#" %>
<%@ Import Namespace="Quartz" %>
<%@ Import Namespace="Quartz.Impl" %>
<script runat="server">
private IScheduler Scheduler { get; set; }
void Application_Start(object sender, EventArgs e)
{
Scheduler = StdSchedulerFactory.GetDefaultScheduler();
Scheduler.Start();
IJobDetail dailyReset = JobBuilder.Create<ApplicationJobs.DailyReset>()
.WithIdentity("dailyReset", "group1")
.Build();
ITrigger dailyResetTrigger = TriggerBuilder.Create()
.WithIdentity("dailyResetTrigger", "group1")
.StartAt(DateBuilder.DateOf(3, 0, 0))
.WithSimpleSchedule(x => x
.WithIntervalInHours(24)
.RepeatForever())
.Build();
Scheduler.ScheduleJob(dailyReset, dailyResetTrigger);
}
</script>
Then my ApplicationJobs class :
public class ApplicationJobs : System.Web.HttpApplication
{
public class DailyReset : IJob
{
public void Execute(IJobExecutionContext context)
{
using (var uow = new UnitOfWork())
{
foreach (Person person in uow.Context.Persons)
{
//do something
}
}
}
}
}
And finally the UnitOfWork :
public class UnitOfWork : IDisposable
{
private const string _httpContextKey = "_unitOfWork";
private MyEntities _dbContext;
public static UnitOfWork Current
{
get { return (UnitOfWork)HttpContext.Current.Items[_httpContextKey]; }
}
public UnitOfWork()
{
HttpContext.Current.Items[_httpContextKey] = this;
}
public MyEntities Context
{
get
{
if (_dbContext == null)
_dbContext = new MyEntities();
return _dbContext;
}
}
}
But using (var uow = new UnitOfWork()) does not work because of HttpContext.Current.Items[_httpContextKey] = this; in uow's constructor; I read that HttpContext.Current is not available in Application_Start.
I read related posts, notably this one, but I don't really understand whether I need to create something like the UnitOfWorkScope described here, or whether there is a way to do it as it currently stands.
So, is there any clean and safe way to schedule a task which would use my UnitOfWork to update entities?
Thanks a lot.

Your problem comes from the fact that when your job runs, it will be called by the Quartz scheduler, not from an HTTP request (even if the job lives in an ASP.NET website).
So HttpContext.Current will most likely be null.
Keep in mind when using Quartz that you should see it as a totally parallel process to your website, almost like a separate service.
If you need to pass "arguments" to your job, you can use the job data map:
JobDataMap dataMap = jobContext.JobDetail.JobDataMap;
(see here for more info: http://www.quartz-scheduler.net/documentation/quartz-2.x/tutorial/more-about-jobs.html)
If you need to access your job later, just use the same key and group (the ones you used in WithIdentity) when building a JobKey.
Note that it is recommended for an Entity Framework context to stay alive only for the duration of the action that needs it, so you can simply instantiate a new context at the start of the job and dispose it at the end.
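As a sketch of both points (assuming Quartz.NET 2.x and the MyEntities context from the question; the "mode" key is an invented example), the job below reads an argument from the JobDataMap and keeps its EF context alive only for the duration of Execute:

```csharp
using Quartz;

// Hypothetical sketch: the job creates (and disposes) its own EF context
// instead of relying on any HttpContext-bound UnitOfWork.
public class DailyReset : IJob
{
    public void Execute(IJobExecutionContext context)
    {
        // Read arguments passed in via the JobDataMap (if any).
        JobDataMap dataMap = context.JobDetail.JobDataMap;
        string mode = dataMap.GetString("mode"); // "mode" is an assumed key

        using (var db = new MyEntities()) // context lives only for this run
        {
            foreach (var person in db.Persons)
            {
                // do something with person
            }
            db.SaveChanges();
        }
    }
}
```

The value could be set when building the job, e.g. with .UsingJobData("mode", "nightly") on the JobBuilder.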

The issue is that you're not executing the job within a web request. As in: a web request starts, you check outstanding work, do the work if required, the request ends. Without a web request you have no context, as the context lives for the lifetime of the web request and is accessible via the request thread.
Another issue you're going to have is that the app pool, with default settings, may shut down if there's no activity. So you would need a way to keep it alive.
An alternative is to use something like the Windows Task Scheduler to hit the website and kick off the work.
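The "hit the website" alternative can be as small as a console program that Windows Task Scheduler runs every night; a minimal sketch (the URL and the maintenance endpoint are assumptions):

```csharp
using System;
using System.Net;

// Minimal "pinger" that Windows Task Scheduler can run nightly.
// Requesting the page wakes the app pool and triggers the work
// server-side, inside a normal web request (so HttpContext exists).
class NightlyPinger
{
    static void Main()
    {
        using (var client = new WebClient())
        {
            // Assumed endpoint that performs the nightly maintenance.
            string result = client.DownloadString("http://example.com/maintenance/daily");
            Console.WriteLine(result);
        }
    }
}
```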

Related

Can I use service to operate dbcontext in blazor like in asp.net core mvc?

In my asp.net core mvc project, I usually use this service for business operations
services.AddDbContextPool<AppDbContext>(option => {
option
.UseMySql(
Configuration.GetConnectionString("SqliteConstr"),
new MySqlServerVersion(new Version(5, 5, 62)),
// retry on error
MysqlOpt => MysqlOpt.EnableRetryOnFailure()
);
});
services.AddScoped<GameService>();
public class GameService
{
private AppDbContext DbContext { get; set; }
public GameService(AppDbContext dbContext)
{
//DbContext = contextFactory.CreateDbContext();
DbContext = dbContext;
}
public async Task<Game[]> GetGamesAsync()
{
return await DbContext.Games.ToArrayAsync();
}
}
Now I plan to migrate to Blazor Server, but the official tutorial asks me to inject a DbContextFactory into the Blazor component, like the following. I am not very familiar with Blazor; is this a requirement in Blazor?
services.AddDbContextFactory<ApplicationDbContext>(options =>
{
options.UseSqlServer(Configuration.GetConnectionString("DefaultConnection"));
});
// in the Blazor page
@inject IDbContextFactory<ApplicationDbContext> DbFactory
using var context = DbFactory.CreateDbContext();
Filters.Loading = true;
var contact = await context.Contacts.FirstAsync(
c => c.Id == Wrapper.DeleteRequestId);
if (contact != null)
{
context.Contacts.Remove(contact);
await context.SaveChangesAsync();
}
Filters.Loading = false;
await ReloadAsync();
AddScoped in Blazor Server is almost equivalent to AddSingleton in a normal HTTP-based app (Web API, ASP.NET MVC, Razor Pages, etc.). Not quite, but every user will get one, and only one, DbContext. Blazor Server works over a WebSocket connection; there is no traditional request lifetime. There is one pipe that continuously sends stuff back and forth, so your scoped service will be used for that one connection until it dies. Multiple UI actions will end up using the exact same instance to query with. This can cause problems, but will most definitely work if you test with only a single query.
Using the factory and creating a DbContext when it's needed solves this nicely.
You can rewrite your GameService to use the factory.
public class GameService
{
private readonly IDbContextFactory<ApplicationDbContext> factory;
public GameService(IDbContextFactory<ApplicationDbContext> factory)
{
this.factory = factory;
}
public async Task<Game[]> GetGamesAsync()
{
using (var context = factory.CreateDbContext()) {
return await context.Games.ToArrayAsync();
}
}
}
Then you can inject your GameService as a Singleton or Scoped service.
Registering a DbContextFactory is the recommended way to use DbContexts in Blazor Server applications.
When you call AddDbContext(), the DbContext is registered with a scoped lifetime. This can lead to concurrency issues whenever two or more operations run at the same time on one DbContext; an InvalidOperationException gets thrown.
By registering a DbContextFactory and creating new DbContexts whenever they are needed, this risk is mitigated.
The reason is that the Blazor Server hosting model consists of only one initial HTTP request, with subsequent changes delivered over the WebSocket connection. The lifetime of a scoped DbContext is therefore tied to the time the user has the web page open and resembles a singleton. Because the whole circuit shares that one instance, concurrency issues happen frequently.
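On the registration side, a minimal sketch using the question's own types (AppDbContext, GameService, and the connection-string name are taken from the question; the MySQL provider call assumes the Pomelo package):

```csharp
// In ConfigureServices of a Blazor Server app.
// AddDbContextFactory registers IDbContextFactory<AppDbContext> as a
// singleton, so services that consume it can safely be singletons too.
services.AddDbContextFactory<AppDbContext>(options =>
    options.UseMySql(
        Configuration.GetConnectionString("SqliteConstr"),
        new MySqlServerVersion(new Version(5, 5, 62))));

// GameService only holds the factory, never a live DbContext,
// so a singleton lifetime is safe here.
services.AddSingleton<GameService>();
```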

Issue when upgrading Autofac with async tasks and owned instances

I have an issue with Autofac after upgrading from 4.9.2 to 5.2 in my ASP.NET MVC application.
I make use of the Func<Owned<T>> factory pattern in the controller, because a controller action starts a long-running Task that outlives the request. In that Task I resolve other instances.
This worked fine in Autofac 4.9.2, but after upgrading to Autofac 5.2 the parent lifetime scope (AutofacWebRequest) gets disposed, and it is no longer possible to resolve instances within the owned instance:
Instances cannot be resolved and nested lifetimes cannot be created from this LifetimeScope as it (or one of its parent scopes) has already been disposed.
Is there something I can do to work around this, or is there a best practice?
Controller Code:
private readonly Func<Owned<IBusinessLogic>> _businessLogicFactory;
public ActionResult Index()
{
var businessLogic = _businessLogicFactory();
var unitOfWorkFactory = _unitOfWorkFactory;
Task.Run(() =>
{
System.Threading.Thread.Sleep(5000); // Sleep simulates that it may take some time until other instances are resolved
using (businessLogic)
{
var task = businessLogic.Value.DoHardBusinessAsync();
task.Wait();
}
});
return View();
}
Business Logic Code (also using a factory):
public class BusinessLogic : IBusinessLogic
{
private readonly Func<Owned<OtherBusinessLogic>> _otherBusinessLogicFactory;
public BusinessLogic(Func<Owned<OtherBusinessLogic>> otherBusinessLogicFactory)
{
_otherBusinessLogicFactory = otherBusinessLogicFactory;
}
public async Task DoHardBusinessAsync()
{
using (var otherBusiness = _otherBusinessLogicFactory())
{
await otherBusiness.Value.DoHardBusinessAsync();
}
}
}
You could try to create a new lifetime scope, independent of the request scope, to be used with your long-running task, like so:
Task.Run(() =>
{
using (var scope = container.BeginLifetimeScope())
{
System.Threading.Thread.Sleep(5000); // Sleep simulates that it may take some time until other instances are resolved
using (businessLogic)
{
var task = businessLogic.Value.DoHardBusinessAsync();
task.Wait();
}
}
});
Look at this question for ideas on how to get a hold of the container
Retrieving Autofac container to resolve services
@NataliaMuray's approach is awesome - one downside of it is that it tends to encourage Service Locator style resolution rather than constructor injection. This can tend to "hide" dependencies, making it harder to identify the dependencies of a given class.
One potential solution is to introduce the notion of a dependency that is explicit that it wraps another dependency that you want to resolve outside the normal web request's lifetime scope.
The code might look something like:
public class AsyncRunner : IAsyncRunner
{
public ExecutionResult TryExecute<TService>(Action<TService> toEvaluate, string exceptionErrorMessage, int timeoutMilliseconds, string additionalErrorInformation = "")
{
try
{
var task = new Task(() =>
{
using (var scope = container.BeginLifetimeScope())
{
var service = scope.Resolve<TService>();
toEvaluate(service);
}
});
task.ContinueWith(t => { /* logging here */ }, TaskContinuationOptions.OnlyOnFaulted | TaskContinuationOptions.ExecuteSynchronously).SuppressExceptions();
task.Start();
var completedWithinTime = task.Wait(timeoutMilliseconds);
return completedWithinTime ? ExecutionResult.Ok : ExecutionResult.TimedOut;
}
catch (Exception e)
{
/* logging here */
return ExecutionResult.ThrewException;
}
}
}
Register IAsyncRunner with Autofac as well.
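For completeness, a minimal sketch of what the IAsyncRunner abstraction and its Autofac registration could look like (the interface shape and the ExecutionResult enum are assumptions inferred from the snippet above, and this assumes AsyncRunner takes the scope it resolves from via its constructor):

```csharp
using System;
using Autofac;

public enum ExecutionResult { Ok, TimedOut, ThrewException }

// Assumed interface matching the AsyncRunner implementation above.
public interface IAsyncRunner
{
    ExecutionResult TryExecute<TService>(
        Action<TService> toEvaluate,
        string exceptionErrorMessage,
        int timeoutMilliseconds,
        string additionalErrorInformation = "");
}

// Registration sketch: hand AsyncRunner the root lifetime scope so the
// child scopes it creates are independent of any web request scope.
public static class AsyncRunnerRegistration
{
    public static void Register(ContainerBuilder builder)
    {
        builder.Register(c => new AsyncRunner(c.Resolve<ILifetimeScope>()))
               .As<IAsyncRunner>()
               .SingleInstance();
    }
}
```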
And then your dependency, instead of
private readonly Func<Owned<IBusinessLogic>> _businessLogicFactory;
would be
private readonly IAsyncRunner _businessLogic;
And instead of:
var businessLogic = _businessLogicFactory();
var unitOfWorkFactory = _unitOfWorkFactory;
Task.Run(() =>
{
System.Threading.Thread.Sleep(5000); // Sleep simulates that it may take some time until other instances are resolved
using (businessLogic)
{
var task = businessLogic.Value.DoHardBusinessAsync();
task.Wait();
}
});
would be:
//var businessLogic = _businessLogicFactory();
var unitOfWorkFactory = _unitOfWorkFactory;
Task.Run(() =>
{
System.Threading.Thread.Sleep(5000); // Sleep simulates that it may take some time until other instances are resolved
_businessLogic.TryExecute<IBusinessLogic>(z => {
// z is the resolved IBusinessLogic itself, no Owned wrapper needed
z.DoHardBusinessAsync().Wait();
}, "DoHardBusiness failed", 30000);
});
The advantage of this style is that property and constructor injection make clear what the dependencies are and how they are used (i.e. the declaration makes clear that it will be resolved outside the context of the standard lifetime scope). Note you don't need Owned with my suggestion (disposing the manually constructed lifetime scope is sufficient). I have removed the use of Func, but you could use Func or Lazy alongside my suggestion if you really needed it.

Ninject with EF multithreading

Good morning everyone!
I just started working on a project where I see a memory leak.
The situation is as follows: there is a console application which basically runs all the time in a while(true) loop.
There are a bunch of classes which do some logic in the loop.
Each class has an Execute() method which internally uses Task.Run(); the call is not awaited by anyone.
These classes are called engines. All engines are stateless classes which are stored in an array in the main Program.cs class.
The code basically looks like:
private static List<BaseEngine> Engines;
public static void Main(string[] args)
{
InitializeDI();
RunProgram();
}
private static void RunProgram()
{
while (true)
{
try
{
foreach (var engine in Engines)
{
engine.Execute();
}
}
catch (Exception ex)
{
//handle
}
finally
{
Thread.Sleep(TimeSpan.FromSeconds(3));
}
}
}
private static void InitializeDI()
{
_kernel = new StandardKernel();
ServiceLocator.SetLocatorProvider(() => new NinjectServiceLocator(_kernel));
NinjectConfig.Setup(_kernel);
}
The sample engine looks like:
public class SampleEngine : BaseEngine
{
public override void Execute(Task task)
{
var someService = ServiceLocator.Current.GetInstance<IDbContext>();
System.Threading.Tasks.Task.Run(() =>
{
// some action using dbcontext
});
}
}
In the above example, SampleEngine gets IDbContext from the Ninject DI container, but other engines could use other services registered in DI.
All the dependencies are registered as InCallScope().
Basically, almost every engine fires and forgets the given method using Task.Run().
What I did was change the Execute method to return the Task, and after the task ran to completion I disposed it. This did not bring any value.
I did some investigation and saw that the problem is inside Ninject.Activation.Cache. A manual cache clean helps, but I know the problem is somewhere in the code, and I cannot find it.
Since every dependency is registered as InCallScope(), they should be disposed after each task runs to completion. I don't see anything holding a reference to these objects, because every engine is stateless.
I used ANTS to gather some information: memory usage just keeps growing each minute, and the profiler points to the Ninject cache.
It looks like the DbContext is not disposed and still exists in the Ninject cache. Is it a problem of having a lot of tasks in the system, or am I doing something wrong?
Thanks in advance.
Cheers!
The simplest approach seems to be embedding the using inside your task. But it is a blind shot, as your code appears simplified; you don't use the task parameter in your method.
public class SampleEngine : BaseEngine
{
public override void Execute(Task task)
{
System.Threading.Tasks.Task.Run(() =>
{
using (var someService = ServiceLocator.Current.GetInstance<IDbContext>())
{
// some action using dbcontext
}
});
}
}
For a more advanced approach, here is an interesting link. It features an InTaskScope binding, based on AsyncLocal and custom tasks created through extensions of TaskFactory.

AutoFac - Initialize heavy-weight singletons on app_start

Our configuration is an MVC 5 C# app using Autofac.
We have a number of singletons which, if they're initialized on the first request, cause a bad experience for the user, because their initialization takes around 3-4 seconds in total. Since we're using Autofac for dependency injection, I'm wondering if there's any way of making sure the singletons (or these specific ones) are built in App_Start so we don't lose time when the user sends the first request. If not, what's the best way of solving this problem?
The general solution to this type of problem is to hide such heavyweight objects behind a proxy implementation. This way you can trigger the initialization process directly at application startup, while the operation runs in the background without blocking requests (unless they require the uninitialized data during their request).
In case your code looks like this:
// The abstraction in question
public interface IMyService
{
ServiceData GetData();
}
// The heavy implementation
public class HeavyInitializationService : IMyService {
public HeavyInitializationService() {
// Load data here
Thread.Sleep(3000);
}
public ServiceData GetData() => ...
}
A proxy can be created as follows:
public class LazyMyServiceProxy : IMyService {
private readonly Lazy<IMyService> lazyService;
public LazyMyServiceProxy(Lazy<IMyService> lazyService) {
this.lazyService = lazyService;
}
public ServiceData GetData() => this.lazyService.Value.GetData();
}
You can use this proxy as follows:
Lazy<IMyService> lazyService = new Lazy<IMyService>(() =>
new HeavyInitializationService());
container.Register<IMyService>(c => new LazyMyServiceProxy(lazyService))
.SingleInstance();
// Trigger the creation of the heavy data on a background thread:
Task.Factory.StartNew(() => {
// Triggers the creation of HeavyInitializationService on background thread.
var v = lazyService.Value;
});

Quartz scheduler. Schedule job during asp.net application start

I want to use the task scheduler to create a thread during application start.
I set it up thanks to this and this, but something goes wrong and the job does not run, even though it is initialized beforehand.
My class which is run before start:
[assembly: WebActivatorEx.PreApplicationStartMethod(
typeof(Application.App_Start.TaskScheduler), "Start")]
namespace Application.App_Start
{
public static class TaskScheduler
{
private static readonly IScheduler scheduler = new StdSchedulerFactory().GetScheduler();
private static void CreateTaskToDeleteTmpFiles(Object sender)
{
scheduler.Start();
//Create job which will be add to thread
IJobDetail job = JobBuilder.Create<DeleteTmpJob>()
.WithIdentity("ClearTmpFiles")
.StoreDurably()
.Build();
//Create thread which run the job after specified conditions
ITrigger trigger = TriggerBuilder.Create()
.WithIdentity("ClearTmpFiles")
.StartAt(DateBuilder.FutureDate(1, IntervalUnit.Second))
.Build();
//Add Job and Trigger to scheduler
scheduler.ScheduleJob(job, trigger);
}
}
}
My job class:
public class DeleteTmpJob : IJob
{
private IDocumentStore documentStore;
private IUploaderCollection uploaderCollection;
public DeleteTmpJob(IDocumentStore _documentStore, IUploaderCollection _uploaderCollection)
{
documentStore = _documentStore;
uploaderCollection = _uploaderCollection;
}
public void Execute(IJobExecutionContext context)
{
documentStore.ClearTmpDirectory();
}
}
Job is not running
Anyone can help?
Have you tried using an empty constructor for your job?
"Each (and every) time the scheduler executes the job, it creates a new instance of the class before calling its Execute(..) method. One of the ramifications of this behavior is the fact that jobs must have a no-argument constructor."
You may need to implement your own JobFactory to allow you to use DI. How you implement it depends on which library you are using.
"When a trigger fires, the JobDetail (instance definition) it is associated to is loaded, and the job class it refers to is instantiated via the JobFactory configured on the Scheduler. The default JobFactory simply calls the default constructor of the job class using Activator.CreateInstance, then attempts to call setter properties on the class that match the names of keys within the JobDataMap. You may want to create your own implementation of JobFactory to accomplish things such as having your application's IoC or DI container produce/initialize the job instance."
source: see here
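To sketch what such a JobFactory could look like with the question's Ninject setup (a hedged example, assuming Quartz.NET 2.x and that DeleteTmpJob's dependencies are bound in the kernel):

```csharp
using Ninject;
using Quartz;
using Quartz.Spi;

// A minimal DI-aware job factory: instead of Activator.CreateInstance,
// ask the Ninject kernel to build the job, so constructor arguments
// (IDocumentStore, IUploaderCollection, ...) get injected.
public class NinjectJobFactory : IJobFactory
{
    private readonly IKernel kernel;

    public NinjectJobFactory(IKernel kernel)
    {
        this.kernel = kernel;
    }

    public IJob NewJob(TriggerFiredBundle bundle, IScheduler scheduler)
    {
        // Resolve the concrete job type declared on the JobDetail.
        return (IJob)kernel.Get(bundle.JobDetail.JobType);
    }

    public void ReturnJob(IJob job)
    {
        // Let the kernel dispose/release the instance if it tracks it.
        kernel.Release(job);
    }
}

// Hook it up before scheduling anything:
// scheduler.JobFactory = new NinjectJobFactory(kernel);
```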
I had the same problem; when I deleted the constructor, the job worked. First try calling the base constructor; if it still doesn't work, try deleting the constructor.
