I have a multi-tenant ASP.NET application, and our database is set up with soft deletes. Initially, we restricted data directly at the query level, e.g.:
var foos = context.Foos.Where(foo => !foo.Deleted && foo.TenantId == currentTenantId).ToList();
As you can imagine, this bloats all of the queries in our data access layer, and makes the API very vulnerable if one forgets to add the correct filter conditions. We have decided to apply global filtering to the context with Z.EntityFramework.Plus.EF6:
public class FooDataContextFactory
{
    public FooDataContext CreateContext()
    {
        var context = new FooDataContext();
        context.Filter<Foo>(collection => collection.Where(foo => !foo.Deleted));

        var principal = Thread.CurrentPrincipal as ClaimsPrincipal;
        if (principal.HasClaim(claim => claim.Type == "TenantId"))
        {
            var currentTenantId = int.Parse(principal.FindFirst("TenantId").Value);
            context.Filter<Foo>(collection => collection.Where(foo => foo.TenantId == currentTenantId));
        }

        return context;
    }
}
This works perfectly for a single user. However, when a user switches tenants, we have issues with the filter expression being saved in the query plan cache. This is a known issue with Entity Framework Plus, and since it doesn't appear to be resolved, I need to find a workaround.
The most immediate solution I can think of is to associate the lifetime of the query plan cache to the current session, and when the user logs out or switches tenant, the cache is destroyed. Is this possible, and if so, how can I achieve this?
I had this exact same problem and tried to work with Z.EntityFramework.Plus.EF6, with the same issues. I found that the zzzprojects team also has EntityFramework.DynamicFilters, which works much better for this purpose. The query that is cached is parameterized, and the value is injected at runtime using the selector function you provide.
using System.Data.Entity;
using EntityFramework.DynamicFilters;

public class Program
{
    public class CustomContext : DbContext
    {
        private int _tenantId;

        public int GetTenantId()
        {
            return _tenantId;
        }

        // Call this function to set the tenant once authentication is complete.
        // Alternatively, you could pass tenantId in when constructing CustomContext if you already know it,
        // or pass a function that returns the tenant to the constructor and call it here.
        public void SetTenantId(int tenantId)
        {
            _tenantId = tenantId;
        }

        protected override void OnModelCreating(DbModelBuilder modelBuilder)
        {
            // Filter applies to any model that implements ITenantRestrictedObject
            modelBuilder.Filter(
                "TenantFilter",
                (ITenantRestrictedObject t, int tenantId) => t.TenantId == tenantId,
                (CustomContext ctx) => ctx.GetTenantId(), // Might be able to replace this with a property accessor... I haven't tried it
                opt => opt.ApplyToChildProperties(false)
            );
        }
    }

    public interface ITenantRestrictedObject
    {
        int TenantId { get; }
    }
}
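Usage would then look something like this (a sketch, assuming a Foos DbSet whose Foo entity implements ITenantRestrictedObject; the soft-delete restriction from the question could be added the same way with a second modelBuilder.Filter call):

using (var context = new CustomContext())
{
    // e.g. resolved from the "TenantId" claim once authentication is complete
    context.SetTenantId(currentTenantId);

    // The tenant filter is applied automatically. The cached query is
    // parameterized, so switching tenants only changes the parameter value.
    var foos = context.Foos.Where(foo => !foo.Deleted).ToList();
}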
I'm currently creating a Blazor Server application that uses Azure AD for authentication. The authentication works perfectly, but I want to set up some AuthorizeViews within the application.
I've created a custom authorization handler whereby I take the user's email and find which user group they belong to within my own SQL Server database. For the database calls I'm using Dapper with some table models. When I make a database call within the HandleRequirementAsync function, it returns a NullReferenceException. I cannot see where the error could be in the code; am I missing something obvious?
The users list should return the user in the database, and the groups list should return the group that user is assigned to based on an ID. Both of these calls work perfectly throughout the rest of the application; they only cause errors within the section below.
GroupHandler.cs
public class GroupHandler : AuthorizationHandler<GroupRequirement>
{
    public IUserData _dbUser;
    public IGroupData _dbGroup;

    protected override async Task HandleRequirementAsync(AuthorizationHandlerContext context, GroupRequirement requirement)
    {
        var emailAddress = context.User.Identity.Name;
        List<UserModel> users = await _dbUser.GetUserByEmail(emailAddress);
        List<GroupModel> groups = await _dbGroup.GetGroupByID(users[0].Group_ID.ToString());
        if (groups[0].Group.Contains(requirement.Group))
        {
            context.Succeed(requirement);
        }
    }
}
GroupRequirement.cs
public class GroupRequirement : IAuthorizationRequirement
{
    public string Group { get; }

    public GroupRequirement(string group)
    {
        Group = group;
    }
}
Startup.cs
services.AddAuthorization(config =>
{
    config.AddPolicy("IsAdmin", policy =>
        policy.Requirements.Add(new GroupRequirement("Admin")));
});
Error
NullReferenceException: Object reference not set to an instance of an object
GroupHandler.HandleRequirementAsync(AuthorizationHandlerContext context, GroupRequirement requirement) in GroupHandler.cs, line 30
List<UserModel> users = await _dbUser.GetUserByEmail(emailAddress);
The problem lies in these lines: you never initialize these objects, so they are null by default. Please initialize these fields.
public IUserData _dbUser;
public IGroupData _dbGroup;
It looks like you never assign the _dbGroup field in your GroupHandler class.
Depending on your setup, you may be able to inject an IUserData via the constructor.
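For example, a minimal sketch (assuming IUserData and IGroupData are already registered with the service container):

public class GroupHandler : AuthorizationHandler<GroupRequirement>
{
    private readonly IUserData _dbUser;
    private readonly IGroupData _dbGroup;

    // DI supplies these when the handler itself is registered, e.g.
    // services.AddScoped<IAuthorizationHandler, GroupHandler>();
    public GroupHandler(IUserData dbUser, IGroupData dbGroup)
    {
        _dbUser = dbUser;
        _dbGroup = dbGroup;
    }
}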
I'm working on an ASP.NET Core project targeting .NET 5 with Microsoft Identity and Entity Framework Core (code-first approach).
In my project some entities will inherit from the IAuditProperties interface.
IAuditProperties:
This interface is used to read/write audit info from/in any entity that implements it.
public interface IAuditProperties
{
    string CreatedBy { get; set; }
    DateTime CreatedOn { get; set; }
    bool IsEdited { get; set; }
    string LastEditor { get; set; }
    DateTime LastEditDate { get; set; }
}
In my project I wrote some extension methods that write audit info; all of these extensions are for any entity that implements the IAuditProperties interface.
The WriteCreationAudit extension method, as an example:
/// <summary>
/// Write audit properties for an <see cref="IAuditProperties"/> for the first creation
/// </summary>
/// <param name="obj"><see cref="IAuditProperties"/> object to write in</param>
/// <param name="appUser">Current user</param>
public static void WriteCreationAudit(this IAuditProperties obj, AppUser appUser)
{
    obj.CreatedBy = appUser.FullName;
    obj.CreatedOn = DateTime.Now.InMorocco();
    obj.IsEdited = false;
}
What exactly is the core issue?
As you can see, the extension method WriteCreationAudit receives an appUser parameter, and this parameter's type (AppUser) inherits from IdentityUser.
So, the exact issue is: how can I get an AppUser object without passing it as a parameter from the controller?
How am I handling this issue at this time?
At the moment I'm depending on controllers and DI to get the AppUser object and pass it to WriteCreationAudit, and I don't love this technique.
So please, how can I achieve my goal of obtaining the AppUser from within the extension method? Or, if I can't, is there another good way?
Massive thanks in advance.
Depending on the circumstances, I would suggest two approaches; take whichever suits your case best, or even better, take the idea and implement it your way.
Simple data is required
From what you describe, everything required is just a FullName, and maybe a userId in the future. So why not simply put them somewhere in the JWT, or even a cookie, depending on your authentication mechanism? They're not ultra-secure information to guard; they can easily be read anyway, and the JWT was designed to hold exactly that kind of information. So, just inject IHttpContextAccessor into the DbContext (or the repository, if we make use of the repository pattern), take out the user info, then tweak the SaveChanges a bit.
Data required to process is somewhat complex or needs to be secured
Make something like a BaseInfoRequest object that contains all the information we need, set it in some upper middleware, and store it in cache with an absolute expiration equivalent to the request timeout; the key could be HttpContext.Session.Id + "some constant string" representing the request info object. Then take it out of the cache wherever we need it (see the sketch after the note below).
Just a small note: if we don't expose the UserName, for example, but only the userId, that means for each request we need to fetch the UserName from somewhere. That's not a good idea in production scenarios, so take some time to balance things out.
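A rough sketch of the second approach; BaseInfoRequest, the cache key suffix, and the 30-second expiration are placeholder assumptions, and session state must be enabled for HttpContext.Session.Id to be available:

public class BaseInfoRequest
{
    public string UserId { get; set; }
    public string FullName { get; set; }
}

public class RequestInfoMiddleware
{
    private readonly RequestDelegate _next;

    public RequestInfoMiddleware(RequestDelegate next) => _next = next;

    // Registered early in the pipeline via app.UseMiddleware<RequestInfoMiddleware>();
    public async Task InvokeAsync(HttpContext context, IMemoryCache cache)
    {
        var info = new BaseInfoRequest
        {
            UserId = context.User.FindFirst("sub")?.Value,
            FullName = context.User.FindFirst("FullName")?.Value
        };

        // Key: session id + a constant string; expiration roughly the request timeout.
        cache.Set(context.Session.Id + "_requestInfo", info, TimeSpan.FromSeconds(30));

        await _next(context);
    }
}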
What's wrong with injecting via DI into the controller and then passing the parameter to the extension method?
I just recalled that a while back Microsoft said not to inject SignInManager and UserManager into razor components (nor razor pages / razor components with @page). Instead, extend UserClaimsPrincipalFactory to add claims like:
public class AdditionalUserClaimsPrincipalFactory
    : UserClaimsPrincipalFactory<AppUser, IdentityRole>
{
    public AdditionalUserClaimsPrincipalFactory(
        UserManager<AppUser> userManager,
        RoleManager<IdentityRole> roleManager,
        IOptions<IdentityOptions> optionsAccessor)
        : base(userManager, roleManager, optionsAccessor)
    { }

    public override async Task<ClaimsPrincipal> CreateAsync(AppUser user)
    {
        var principal = await base.CreateAsync(user);
        var identity = (ClaimsIdentity)principal.Identity;

        var claims = new List<Claim>();
        claims.Add(new Claim("FullName", user.FullName ?? ""));

        identity.AddClaims(claims);
        return principal;
    }
}
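For the factory to take effect, it also has to be registered with DI; as far as I recall, the registration looks like:

services.AddScoped<IUserClaimsPrincipalFactory<AppUser>, AdditionalUserClaimsPrincipalFactory>();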
I agree with @Gordon Khanh Ng.; this is just an implementation difference.
This is a very common requirement and there are many ways to achieve it. Here is probably the easiest:
Override the SaveChanges()/SaveChangesAsync() method in your DbContext class, and inject IHttpContextAccessor in the constructor.
Then use this code inside your DbContext class.
The GetCurrentUserId() method may differ depending on your Identity implementation.
private string GetCurrentUserId()
{
    var httpContext = _httpContextAccessor?.HttpContext;
    if (httpContext?.User != null)
    {
        var authenticatedUsername = httpContext.User.Claims.Where(c => c.Type == "sub")
            .Select(c => c.Value).SingleOrDefault();
        return authenticatedUsername;
    }
    return null;
}

public override Task<int> SaveChangesAsync(CancellationToken cancellationToken = new CancellationToken())
{
    ChangeTracker.DetectChanges();
    var entries = ChangeTracker.Entries()
        .Where(e => e.State != EntityState.Detached && e.State != EntityState.Unchanged);

    foreach (var entry in entries)
    {
        if (entry.Entity is IAuditProperties trackable)
        {
            var now = DateTime.UtcNow;
            var user = GetCurrentUserId();
            switch (entry.State)
            {
                case EntityState.Added:
                    trackable.CreatedOn = now;
                    trackable.CreatedBy = user;
                    trackable.IsEdited = false;
                    break;
            }
        }
    }
    return base.SaveChangesAsync(cancellationToken);
}
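Since IAuditProperties also carries the edit-audit members, the same switch could presumably be extended with a Modified case along these lines (a sketch reusing the property names from the interface above):

case EntityState.Modified:
    trackable.IsEdited = true;
    trackable.LastEditor = user;
    trackable.LastEditDate = now;
    break;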
The documentation says: "The model for that context is cached and is for all further instances of the context in the app domain. This caching can be disabled by setting the ModelCaching property on the given ModelBuilder."
But I can't find a way to do it. I have to disable caching because I am adding models at runtime, loading all the models from an assembly, and creating the database.
I found this link, which says one way of achieving this is using DbModelBuilder (adding the model manually to the context), but that is for Entity Framework 6 and didn't help for EF Core.
Entity Framework 6. Disable ModelCaching
I hope someone has a solution for this.
Thank you
Once a model is successfully created, EF Core will cache it forever, unless you implement a cache manager that can tell whether one model is equivalent to another, and therefore whether the cached one can be reused or not.
The entry point is to implement the cache manager:
internal sealed class MyModelCacheKeyFactory : IModelCacheKeyFactory
{
    public object Create([NotNull] DbContext context)
    {
        return GetKey(context);
    }
}
The GetKey method, which you have to write, must return an object that will be used as the key. This method should inspect the provided context and return the same key when the models are the same, and something different when they are not. More on the IModelCacheKeyFactory Interface.
I understand this might not be clear (it wasn't for me either), so here is a full example of what I have in production.
A Working Example
My target is to use the same context for different schemas. What we need to do is:
create a new context option
implement the logic in the context
create the cache key factory
make the extension method to specify the schema
call the extension method on the db context
1. Create a new context option
Here is the boilerplate, containing _schemaName only. The boilerplate is necessary as the extension option is immutable by design and we need to preserve the contract.
internal class MySchemaOptionsExtension : IDbContextOptionsExtension
{
    private DbContextOptionsExtensionInfo? _info;
    private string _schemaName = string.Empty;

    public MySchemaOptionsExtension()
    {
    }

    protected MySchemaOptionsExtension(MySchemaOptionsExtension copyFrom)
    {
        _schemaName = copyFrom._schemaName;
    }

    public virtual DbContextOptionsExtensionInfo Info => _info ??= new ExtensionInfo(this);

    public virtual string SchemaName => _schemaName;

    public virtual void ApplyServices(IServiceCollection services)
    {
        // not used
    }

    public virtual void Validate(IDbContextOptions options)
    {
        // always ok
    }

    public virtual MySchemaOptionsExtension WithSchemaName(string schemaName)
    {
        var clone = Clone();
        clone._schemaName = schemaName;
        return clone;
    }

    protected virtual MySchemaOptionsExtension Clone() => new(this);

    private sealed class ExtensionInfo : DbContextOptionsExtensionInfo
    {
        private const long ExtensionHashCode = 741; // this value was chosen as nobody else is using it

        private string? _logFragment;

        public ExtensionInfo(IDbContextOptionsExtension extension) : base(extension)
        {
        }

        private new MySchemaOptionsExtension Extension => (MySchemaOptionsExtension)base.Extension;

        public override bool IsDatabaseProvider => false;

        public override string LogFragment => _logFragment ??= $"using schema {Extension.SchemaName}";

        public override long GetServiceProviderHashCode() => ExtensionHashCode;

        public override void PopulateDebugInfo([NotNull] IDictionary<string, string> debugInfo)
        {
            debugInfo["MySchema:" + nameof(DbContextOptionsBuilderExtensions.UseMySchema)] = ExtensionHashCode.ToString(CultureInfo.InvariantCulture);
        }
    }
}
2. The logic in the context
Here we force the schema onto all the real entities. The schema is obtained from the option attached to the context:
protected override void OnModelCreating(ModelBuilder modelBuilder)
{
    var options = this.GetService<IDbContextOptions>().FindExtension<MySchemaOptionsExtension>();
    if (options == null)
    {
        // nothing to apply, this is a supported scenario.
        return;
    }

    var schema = options.SchemaName;
    foreach (var item in modelBuilder.Model.GetEntityTypes())
    {
        if (item.ClrType != null)
            item.SetSchema(schema);
    }
}
3. Create the cache key factory
Here we need to create the cache key factory, which will tell EF Core that it can cache all the models on the same context, i.e. all the contexts with the same schema will use the same model:
internal sealed class MyModelCacheKeyFactory : IModelCacheKeyFactory
{
    public object Create([NotNull] DbContext context)
    {
        const string defaultSchema = "dbo";

        var extension = context.GetService<IDbContextOptions>().FindExtension<MySchemaOptionsExtension>();

        string schema;
        if (extension == null)
            schema = defaultSchema;
        else
            schema = extension.SchemaName;

        if (string.IsNullOrWhiteSpace(schema))
            schema = defaultSchema;

        // ** this is the magic **
        return (context.GetType(), schema.ToUpperInvariant());
    }
}
The magic here is in this line:
return (context.GetType(), schema.ToUpperInvariant());
We return a tuple with the type of our context and the schema. The hash of a tuple combines the hash of each entry, therefore the context type and schema name form the logical discriminator here. When they match, the model is reused; when they do not, a new model is created and then cached.
4. Make the extension method
The extension method simply hides the addition of the option and the replacement of the cache service:
public static DbContextOptionsBuilder UseMySchema(this DbContextOptionsBuilder optionsBuilder, string schemaName)
{
    if (optionsBuilder == null)
        throw new ArgumentNullException(nameof(optionsBuilder));
    if (string.IsNullOrEmpty(schemaName))
        throw new ArgumentNullException(nameof(schemaName));

    var extension = optionsBuilder.Options.FindExtension<MySchemaOptionsExtension>() ?? new MySchemaOptionsExtension();
    extension = extension.WithSchemaName(schemaName);
    ((IDbContextOptionsBuilderInfrastructure)optionsBuilder).AddOrUpdateExtension(extension);

    optionsBuilder.ReplaceService<IModelCacheKeyFactory, MyModelCacheKeyFactory>();
    return optionsBuilder;
}
In particular, the following line applies our cache manager:
optionsBuilder.ReplaceService<IModelCacheKeyFactory, MyModelCacheKeyFactory>();
5. Call the extension method
You can manually create the context as follows:
var options = new DbContextOptionsBuilder<DataContext>();
options.UseMySchema("schema1");
options.UseSqlServer("connection string omitted");

var context = new DataContext(options.Options);
Alternatively, you can use IDbContextFactory with dependency injection. More on IDbContextFactory Interface.
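For completeness, a sketch of that registration (names reused from the example above; note the schema is fixed at registration time, so a per-request schema would need a custom IDbContextFactory<DataContext> implementation that calls UseMySchema with a runtime value):

// Registers IDbContextFactory<DataContext> so consumers can call CreateDbContext().
services.AddDbContextFactory<DataContext>(options =>
    options.UseMySchema("schema1")
           .UseSqlServer("connection string omitted"));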
You'll need to change the cache key to properly represent the model that you are building and make it distinct.
Implement the IDbModelCacheKeyProvider interface on your derived DbContext. Check this out:
https://learn.microsoft.com/en-us/dotnet/api/system.data.entity.infrastructure.idbmodelcachekeyprovider?redirectedfrom=MSDN&view=entity-framework-6.2.0
Build the model outside the DbContext and then provide it in the options.
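A minimal sketch of the first option in EF6 (the discriminator is whatever distinguishes the models you build at runtime; the constructor parameters here are assumptions):

using System.Data.Entity;
using System.Data.Entity.Infrastructure;

public class DynamicModelContext : DbContext, IDbModelCacheKeyProvider
{
    private readonly string _modelDiscriminator;

    public DynamicModelContext(string nameOrConnectionString, string modelDiscriminator)
        : base(nameOrConnectionString)
    {
        _modelDiscriminator = modelDiscriminator;
    }

    // EF6 caches the built model per CacheKey value rather than only per
    // context type, so different discriminators get freshly built models.
    public string CacheKey => _modelDiscriminator;
}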
Premise
The documented method to apply resource-based authorization in ASP.NET Core is to register an AuthorizationHandler, define each OperationAuthorizationRequirement, then check access to resources using the AuthorizeAsync() method of an injected IAuthorizationService. (Reference docs)
This is all well and good for checking operations against individual records, but my question is how best to authorize against many resources at once (e.g. checking read permission against a list of records for an index page)?
Example
Let's say we have a list of orders, and we want to provide users with a list of the ones they have created. To do this with the practice defined by Microsoft's docs, we would first create some static OperationAuthorizationRequirement objects:
public static class CrudOperations
{
    public static OperationAuthorizationRequirement Create =
        new OperationAuthorizationRequirement { Name = nameof(Create) };
    public static OperationAuthorizationRequirement Read =
        new OperationAuthorizationRequirement { Name = nameof(Read) };
    public static OperationAuthorizationRequirement Update =
        new OperationAuthorizationRequirement { Name = nameof(Update) };
    public static OperationAuthorizationRequirement Delete =
        new OperationAuthorizationRequirement { Name = nameof(Delete) };
}
...and then create our AuthorizationHandler:
public class OrderCreatorAuthorizationHandler :
    AuthorizationHandler<OperationAuthorizationRequirement, Order>
{
    protected override Task HandleRequirementAsync(
        AuthorizationHandlerContext context,
        OperationAuthorizationRequirement requirement,
        Order resource)
    {
        if (context.User == null || resource == null)
        {
            return Task.CompletedTask;
        }

        var currentUserId = context.User.FindFirstValue(ClaimTypes.NameIdentifier);
        if (resource.CreatedById == currentUserId
            && requirement.Name == CrudOperations.Read.Name)
        {
            context.Succeed(requirement);
        }

        return Task.CompletedTask;
    }
}
This is registered as a service in Startup.cs, and is ready to go. In our view logic, we can use our new handler to get a filtered list of orders as such:
// _context is an injected instance of the application's DatabaseContext
// _authorizationService is an injected instance of IAuthorizationService
var allOrders = await _context.Orders.ToListAsync();
var filteredOrders = allOrders
    .Where(o => _authorizationService.AuthorizeAsync(User, o, CrudOperations.Read).Result.Succeeded);
This will work just fine, but to me it seems extremely computationally expensive, as each record is checked separately in memory. The cost would increase even further as the logic in the authorization handler got more complex (for example, if it involved a database call).
It would presumably be far more efficient to have the database engine filter the list for us as follows:
var currentUserId = User.FindFirstValue(ClaimTypes.NameIdentifier);
var filteredOrders = await _context.Orders
    .Where(o => o.CreatedById == currentUserId)
    .ToListAsync();
This will execute faster, but we've now bypassed our authorization logic completely. If we later decide to change the restrictions in our AuthorizationHandler, we must also remember to change them here and anywhere else we use this approach. If you ask me, that rather defeats the purpose of separating the authorization code out in the first place.
Is there a neat solution to this problem that I'm missing? Any advice or guidance on best practice would be much appreciated.
I have the following code:
public void someMethod()
{
    ...
    var accounts = myRepo.GetAccounts(accountId)?.ToList();
    ...
    foreach (var account in accounts)
    {
        account.Status = "INACTIVE";
        var updatedAccount = myRepo.AddOrUpdateAccounts(account);
    }
}

public Account AddOrUpdateAccounts(Account account)
{
    // I want to compare the account in the DB and what is passed in, so get the account from the DB.
    var accountFromDb = myRepo.GetAccounts(account.Id); // this doesn't return what's in the database.
    // Here accountFromDb.Status is returned as INACTIVE, but in the database the column value is ACTIVE.
    ...
    ...
}

public IEnumerable<Account> GetAccounts(int id)
{
    return id <= 0 ? null : m_Context.Accounts.Where(x => x.Id == id);
}
Here, inside someMethod(), I am calling GetAccounts(), which returns data from the Accounts table.
Then I am changing the Status of the account and calling AddOrUpdateAccounts().
Inside AddOrUpdateAccounts(), I want to compare the account that was passed in with what's in the database. When I call GetAccounts(), it returns a record with Status = "INACTIVE". I haven't called SaveChanges(). Why didn't GetAccounts() return the data from the database? In the DB the status is still "ACTIVE".
The repository method should return IQueryable<Account> rather than IEnumerable<Account>, as this will allow the consumer to continue to refine the criteria or govern how the account(s) should be consumed prior to any query executing against the database:
I would consider:
public IQueryable<Account> GetAccountsById(int id)
{
    return m_Context.Accounts.Where(x => x.Id == id);
}
Don't return null, just the query. The consumer can decide what to do if the data is not available.
From there the calling code looks like:
var accounts = myRepo.GetAccountsById(accountId).ToList();
foreach (var account in accounts)
{
    account.Status = "INACTIVE";
}
Your addOrUpdate wouldn't work:
public Account AddOrUpdateAccounts(Account account)
{
    ...
    var account = myRepo.GetAccounts(account.Id); // this doesn't return what's in the database.
You pass in the Account as "account", then try declaring a local variable also called "account". If you removed the var keyword, you would load the DbContext's record over top of your modified account, and your changes would be lost. Loading the account into another variable isn't necessary as long as the account is still associated with the DbContext.
Edit: After changing the var account = ... statement to look like:
public Account AddOrUpdateAccounts(Account account)
{
    ...
    var accountToUpdate = myRepo.GetAccounts(account.Id); // this doesn't return what's in the database.
accountToUpdate will show the modified status rather than what is in the database, because that DbContext is still tracking the reference to the entity that you modified (account). For instance, if I do this:
var account1st = context.Accounts.Single(x => x.AccountId == 1);
var account2nd = context.Accounts.Single(x => x.AccountId == 1);

Console.WriteLine(account1st.Status); // I get "ACTIVE"
Console.WriteLine(account2nd.Status); // I get "ACTIVE"

account1st.Status = "INACTIVE";
Console.WriteLine(account2nd.Status); // I get "INACTIVE"
Both references point to the same instance. It doesn't matter when I attempt to read the Account the 2nd time; as long as it's coming from the same DbContext and the context is tracking instances, I get the tracked entity back. If you read the row via a different DbContext, or use AsNoTracking() with all of your reads, then the account can be read fresh from the database.

You can reload an entity, but if those variables are pointing at the same reference it will overwrite your changes and set the entity back to Unmodified. This can be a little confusing when watching SQL profiler output, because in some cases you will see EF run a SELECT query for an entity, yet the entity returned has different, modified values from what is in the database. Even when loading from the tracking cache, EF can still execute queries against the DB in some cases, but it returns the tracked entity reference.
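For example, to read the current database values regardless of what the tracker holds:

// AsNoTracking bypasses the change tracker, so this returns "ACTIVE" from the
// database even while the tracked account1st instance says "INACTIVE".
var freshAccount = context.Accounts.AsNoTracking().Single(x => x.AccountId == 1);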
/Edit
When it comes to saving the changes, it really just boils down to calling SaveChanges on the DbContext that the account is associated with. The "tricky" part is scoping the DbContext so that this can be done. The recommended pattern for this is the Unit of Work. There are a few different ones out there; the one I recommend for EF is Mehdime's DbContextScope, however you can implement simpler ones that may be easier to understand and follow. Essentially, a unit of work encapsulates the DbContext so that you can define a scope in which repositories access the same DbContext, then commit the changes at the end of the work.
At the most basic level:
public interface IUnitOfWork<TDbContext> : IDisposable where TDbContext : DbContext
{
    TDbContext Context { get; }
    int SaveChanges();
}

public class UnitOfWork : IUnitOfWork<YourDbContext>
{
    private YourDbContext _context = null;

    YourDbContext IUnitOfWork<YourDbContext>.Context
    {
        get { return _context ?? (_context = new YourDbContext("YourConnectionString")); }
    }

    int IUnitOfWork<YourDbContext>.SaveChanges()
    {
        if (_context == null)
            return 0;
        return _context.SaveChanges();
    }

    public void Dispose()
    {
        try
        {
            if (_context != null)
                _context.Dispose();
        }
        catch (ObjectDisposedException)
        { }
    }
}
With this class available, and using dependency injection via an IoC container (Autofac, Unity, or MVC Core), you register the unit of work as instance-per-request so that when the controller and repository classes request one in their constructors, they receive the same instance.
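With Autofac, for instance, that registration might look like this (a sketch):

// One UnitOfWork per web request, shared by the service and repository classes.
builder.RegisterType<UnitOfWork>()
       .As<IUnitOfWork<YourDbContext>>()
       .InstancePerRequest();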
Controller / Service:
private readonly IUnitOfWork<YourDbContext> _unitOfWork = null;
private readonly IYourRepository _repository = null;

public YourService(IUnitOfWork<YourDbContext> unitOfWork, IYourRepository repository)
{
    _unitOfWork = unitOfWork ?? throw new ArgumentNullException("unitOfWork");
    _repository = repository ?? throw new ArgumentNullException("repository");
}
Repository
private readonly IUnitOfWork<YourDbContext> _unitOfWork = null;

public YourRepository(IUnitOfWork<YourDbContext> unitOfWork)
{
    _unitOfWork = unitOfWork ?? throw new ArgumentNullException("unitOfWork");
}

private YourDbContext Context { get { return _unitOfWork.Context; } }
Big disclaimer: This is a very crude initial implementation to explain roughly how a unit of work can operate; it is in no way production-suitable code. It has limitations, specifically around disposing the DbContext, but should serve as a demonstration. Definitely look to implement a library that's already out there and addresses these concerns. Those implementations properly manage the DbContext disposal and can manage a scope beyond the context, like a TransactionScope, so that their SaveChanges is required even if unitOfWork.Context.SaveChanges() is called.
With a unit of work available to the Controller/Service and Repository, the code to use the repository and update your changes becomes:
var accounts = myRepo.GetAccountsById(accountId).ToList();
foreach (var account in accounts)
{
    account.Status = "INACTIVE";
}
UnitOfWork.SaveChanges();
With a proper unit of work it will look more like:
using (var unitOfWork = UnitOfWorkFactory.Create())
{
    var accounts = myRepo.GetAccountsById(accountId).ToList(); // where myRepo can resolve the unit of work via locator.
    foreach (var account in accounts)
    {
        account.Status = "INACTIVE";
    }
    unitOfWork.SaveChanges();
}
This way, if you were to call different repos to fetch data and perform a number of different updates, the changes would be committed all in one call at the end, and rolled back if there was a problem with any of the data.