How can I prevent synchronous database access with Entity Framework Core? e.g. how can I make sure we are calling ToListAsync() instead of ToList()?
I've been trying to get an exception to throw when unit testing a method which calls the synchronous API. Are there configuration options or some methods we could override to make this work?
I have tried using a DbCommandInterceptor, but none of the interceptor methods are called when testing with an in-memory database.
The solution is to use a command interceptor. (Command interceptors only fire for relational providers, which is why yours was never called against the EF in-memory provider; testing against an in-memory SQLite database, as shown further down, does work.)
public class AsyncOnlyInterceptor : DbCommandInterceptor
{
    public bool AllowSynchronous { get; set; } = false;

    // Only the synchronous *Executing methods are overridden; the *ExecutingAsync
    // counterparts keep their default pass-through behaviour, so async calls are allowed.
    public override InterceptionResult<int> NonQueryExecuting(DbCommand command, CommandEventData eventData, InterceptionResult<int> result)
    {
        ThrowIfNotAllowed();
        return result;
    }

    public override InterceptionResult<DbDataReader> ReaderExecuting(DbCommand command, CommandEventData eventData, InterceptionResult<DbDataReader> result)
    {
        ThrowIfNotAllowed();
        return result;
    }

    public override InterceptionResult<object> ScalarExecuting(DbCommand command, CommandEventData eventData, InterceptionResult<object> result)
    {
        ThrowIfNotAllowed();
        return result;
    }

    private void ThrowIfNotAllowed()
    {
        if (!AllowSynchronous)
        {
            throw new NotAsyncException("Synchronous database access is not allowed. Use the asynchronous EF Core API instead.");
        }
    }
}
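NotAsyncException is not an EF Core type; it is just a custom exception, so define whatever suits you. A minimal sketch:

public class NotAsyncException : Exception
{
    public NotAsyncException(string message)
        : base(message)
    {
    }
}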
If you want to write tests for this, you can use a SQLite in-memory database. Note that Database.EnsureCreatedAsync() itself uses some synchronous database access under the hood, so you will need an option to allow synchronous access for specific cases like this.
public partial class MyDbContext : DbContext
{
    private readonly AsyncOnlyInterceptor _asyncOnlyInterceptor;

    public MyDbContext(IOptionsBuilder optionsBuilder)
        : base(optionsBuilder.BuildOptions())
    {
        _asyncOnlyInterceptor = new AsyncOnlyInterceptor();
    }

    protected override void OnConfiguring(DbContextOptionsBuilder optionsBuilder)
    {
        optionsBuilder.AddInterceptors(_asyncOnlyInterceptor);
        base.OnConfiguring(optionsBuilder);
    }

    public bool AllowSynchronous
    {
        get => _asyncOnlyInterceptor.AllowSynchronous;
        set => _asyncOnlyInterceptor.AllowSynchronous = value;
    }
}
Here are some helpers for testing. Make sure your model doesn't use sequences (modelBuilder.HasSequence), because SQLite does not support them.
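The IOptionsBuilder abstraction used by the context above is not part of EF Core and isn't shown in the question; a minimal version (my assumption of its shape) could be:

public interface IOptionsBuilder
{
    DbContextOptions BuildOptions();
}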
public class InMemoryOptionsBuilder<TContext> : IOptionsBuilder
    where TContext : DbContext
{
    public DbContextOptions BuildOptions()
    {
        var optionsBuilder = new DbContextOptionsBuilder<TContext>();

        var connection = new SqliteConnection("Filename=:memory:");
        connection.Open();

        optionsBuilder = optionsBuilder.UseSqlite(connection);

        return optionsBuilder.Options;
    }
}
public class Helpers
{
    public static async Task<MyDbContext> BuildTestDbContextAsync()
    {
        var optionBuilder = new InMemoryOptionsBuilder<MyDbContext>();

        var context = new MyDbContext(optionBuilder)
        {
            AllowSynchronous = true
        };

        await context.Database.EnsureCreatedAsync();
        context.AllowSynchronous = false;

        return context;
    }
}
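With those helpers in place, a test could look roughly like this (a sketch assuming xUnit; SomeEntities is a placeholder DbSet, not something from your model):

public class AsyncOnlyTests
{
    [Fact]
    public async Task SynchronousQueries_Throw()
    {
        using (var context = await Helpers.BuildTestDbContextAsync())
        {
            // The asynchronous API works as usual.
            var items = await context.SomeEntities.ToListAsync();

            // The synchronous API is rejected by the interceptor.
            Assert.Throws<NotAsyncException>(() => context.SomeEntities.ToList());
        }
    }
}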
How can I prevent synchronous database access with Entity Framework Core?
You cannot. Period. And there is no good reason to ever want this. You are basically assuming the programmers using your API are either careless or malicious - why else would you try to stop them from doing something that is legal in the language?
I have tried using a DbCommandInterceptor, but none of the interceptor methods are called when testing with an in-memory database
There are a TON of problems with the in-memory database. I would generally suggest not using it at all, unless you are happy with "possibly works" and never exercising any advanced database features. It is a dead end - we never unit test against an API like this; all our tests are effectively integration tests that run end to end against a real database.
The in-memory provider simply has no guarantee of behaving like the real thing in anything non-trivial. Details may differ, and you end up writing fake tests and chasing issues whose real cause is that the in-memory database just behaves a little differently than the real one. And that is before getting into what a real database can do that the in-memory provider has no concept of (and migrations do not cover either): partial and filtered indexes and indexed views are tremendous performance tools that simply cannot be exercised, and there are subtle differences in things like string comparison.
But the general conclusion is that it is not your job to stop users from calling valid EF Core methods, and you are unlikely to succeed - it is not a scenario the team will ever support. There are sometimes REALLY good reasons to use synchronous calls: in SOME scenarios the async handling breaks down. I have some interceptors (in the HTTP stack) where async calls simply do not work - they never return, and nothing I tried fixed it - so I make synchronous calls when I have to (thank heaven I have a ton of caching in there).
You can prevent it at compile-time to some degree by using the Microsoft.CodeAnalysis.BannedApiAnalyzers NuGet package. More information about it here.
Methods that end up doing synchronous queries can then be added to BannedSymbols.txt, and you will get a compiler warning when attempting to use them. For example adding the following line to BannedSymbols.txt gives a warning when using First() on an IQueryable<T>:
M:System.Linq.Queryable.First``1(System.Linq.IQueryable{``0});Use async overload
These warnings can also be escalated to become compiler errors by treating warnings as errors as explained here:
https://learn.microsoft.com/en-us/dotnet/csharp/language-reference/compiler-options/errors-warnings
Unfortunately, not all synchronous methods can be covered by this approach. For example, since ToList() is an extension method on IEnumerable<T> (not on IQueryable<T>), banning it would also forbid every legitimate use of ToList() on in-memory collections in the same project.
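For the operators that do live on IQueryable<T>, you can keep adding entries of the same shape; for example (double-check the exact documentation IDs against the warnings the analyzer reports for your own code):

M:System.Linq.Queryable.Single``1(System.Linq.IQueryable{``0});Use SingleAsync
M:System.Linq.Queryable.Count``1(System.Linq.IQueryable{``0});Use CountAsync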
I can't really find a good answer for you by searching. So my suggestion in the meantime is that you start doing peer review, a.k.a. code reviews, and any time you find a .ToList(), you change it to await .ToListAsync().
It's not the most high-tech solution, but it keeps everyone honest, and it also helps others become familiar with your work should they ever need to maintain it while you're off sick.
Related
First of all, I couldn't make the title more descriptive, so I will lay out the problem and then show my solution for it.
I'm implementing a backend in ASP.NET Core for our game. We have a few requests that are somewhat large, like requesting the items we offer in the store: every user who starts the game loads the store info, which makes a database trip to pull the entire store catalogue, even though it RARELY changes - less than once a month - so we are making thousands of database trips that aren't needed.
On top of that, we return timestamps for when each item's image last changed. The images are stored in a blob, which means I have to query the blob for the change date, making the request even costlier.
So to solve all of this, I implemented a small class to cache the response until we need to update it, for this request and some others, but I'm not sure if I'm looking at this correctly.
Here is the base abstract class:
public abstract class CachedModel<T>
{
    protected T Model { get; set; }
    private readonly SemaphoreSlim semaphore = new SemaphoreSlim(1, 1);

    protected abstract Task ThreadSafeUpdateAsync();
    protected abstract bool NeedsUpdate();

    public async Task<T> GetModel()
    {
        if (NeedsUpdate())
        {
            try
            {
                await semaphore.WaitAsync();
                if (NeedsUpdate()) // not sure if this is needed, can other threads enter here after the first one already updated the object?
                    await ThreadSafeUpdateAsync();
            }
            finally
            {
                semaphore.Release();
            }
        }
        return Model;
    }
}
And then I implement this class per request, like this:
public class CachedStoreInfo : CachedModel<DesiredModel>
{
    protected override async Task ThreadSafeUpdateAsync()
    {
        // make the trip to the DB and blob service
        Model = someResult;
    }

    protected override bool NeedsUpdate()
    {
        return someLogicToDecideIfNeedsUpdate;
    }
}
Finally, in the ASP.NET controller, all I need to do is this:
[HttpGet]
public async Task<DesiredModel> GetStoreInfo()
{
    return await cachedStoreInfo.GetModel();
}
Is this a proper implementation? Is it even necessary, or is there a smarter way to achieve this? Getting the timestamps from the blob was the main reason I thought about caching the result.
Your implementation looks correct. Of course, the CachedStoreInfo instance should be a singleton in the required scope (as I understand it, in your case that means a single instance for the whole application).
can other threads enter here after the first one already updated the object?
As Kevin Gosse noted, other threads can enter there. Your second check of NeedsUpdate() is part of the double-checked locking pattern, and it is a worthwhile optimization: it prevents threads that were waiting on the semaphore from repeating an update that has just completed.
and is this even necessary or there is a smarter way to achieve this?
As for me, your implementation is minimal and smart enough.
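If you're using the built-in ASP.NET Core container, registering it as an application-wide singleton could look like this (a sketch; the controller name is illustrative and constructor parameters depend on what your implementation needs):

// In Startup.ConfigureServices: one shared CachedStoreInfo for the whole application.
services.AddSingleton<CachedStoreInfo>();

// The controller then receives the same cached instance on every request.
public class StoreController : Controller
{
    private readonly CachedStoreInfo cachedStoreInfo;

    public StoreController(CachedStoreInfo cachedStoreInfo)
    {
        this.cachedStoreInfo = cachedStoreInfo;
    }
}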
I am writing an ASP.NET Core web application that needs all the data from some tables of my database, to later organize it into a readable format for some analysis.
My problem is that this data is potentially massive, so in order to increase performance I decided to fetch the tables in parallel rather than one at a time.
My issue is that I don't quite understand how to achieve this with the built-in dependency injection, since in order to do the work in parallel I need a separate DbContext instance for each parallel task.
The below code produces this exception:
---> (Inner Exception #6) System.ObjectDisposedException: Cannot access a disposed object. A common cause of this error is disposing a context that was resolved from dependency injection and then later trying to use the same context instance elsewhere in your application. This may occur if you are calling Dispose() on the context, or wrapping the context in a using statement. If you are using dependency injection, you should let the dependency injection container take care of disposing context instances.
Object name: 'MyDbContext'.
at Microsoft.EntityFrameworkCore.DbContext.CheckDisposed()
at Microsoft.EntityFrameworkCore.DbContext.get_InternalServiceProvider()
at Microsoft.EntityFrameworkCore.DbContext.get_ChangeTracker()
ASP.NET Core project:
Startup.cs:
public void ConfigureServices(IServiceCollection services)
{
    services.AddMvc().SetCompatibilityVersion(CompatibilityVersion.Version_2_1);
    services.AddDistributedMemoryCache();

    services.AddDbContext<AmsdbaContext>(options =>
        options.UseSqlServer(Configuration.GetConnectionString("ConnectionString"))
            .UseQueryTrackingBehavior(QueryTrackingBehavior.NoTracking));

    services.AddSession(options =>
    {
        options.Cookie.HttpOnly = true;
    });
}
public void Configure(IApplicationBuilder app, ILoggerFactory loggerFactory)
{
    if (HostingEnvironment.IsDevelopment())
    {
        app.UseDeveloperExceptionPage();
    }
    else
    {
        app.UseExceptionHandler("/Error");
        app.UseHsts();
    }

    loggerFactory.AddLog4Net();

    app.UseStaticFiles();
    app.UseCookiePolicy();
    app.UseSession();
    app.UseMvc();
}
Controller's action method:
[HttpPost("[controller]/[action]")]
public ActionResult GenerateAllData()
{
    List<CardData> cardsData;

    using (var scope = _serviceScopeFactory.CreateScope())
    using (var dataFetcher = new DataFetcher(scope))
    {
        cardsData = dataFetcher.GetAllData(); // Calling the method that invokes the method 'InitializeData' from below code
    }

    return something...;
}
.NET Core Library project:
DataFetcher's InitializeData - to get all table records according to some irrelevant parameters:
private void InitializeData()
{
    var tbl1task = GetTbl1FromDatabaseTask();
    var tbl2task = GetTbl2FromDatabaseTask();
    var tbl3task = GetTbl3FromDatabaseTask();

    var tasks = new List<Task>
    {
        tbl1task,
        tbl2task,
        tbl3task,
    };

    Task.WaitAll(tasks.ToArray());

    Tbl1 = tbl1task.Result;
    Tbl2 = tbl2task.Result;
    Tbl3 = tbl3task.Result;
}
DataFetcher's sample task:
private async Task<List<SomeData>> GetTbl1FromDatabaseTask()
{
    using (var amsdbaContext = _serviceScope.ServiceProvider.GetRequiredService<AmsdbaContext>())
    {
        amsdbaContext.ChangeTracker.QueryTrackingBehavior = QueryTrackingBehavior.NoTracking;
        return await amsdbaContext.StagingRule.Where(x => x.SectionId == _sectionId).ToListAsync();
    }
}
I'm not sure you actually need multiple contexts here. You may have noticed that the EF Core docs carry this conspicuous warning:
Warning
EF Core does not support multiple parallel operations being run on the same context instance. You should always wait for an operation to complete before beginning the next operation. This is typically done by using the await keyword on each asynchronous operation.
This is not entirely accurate, or rather, it's simply worded somewhat confusingly. You can actually issue parallel queries from a single context instance. The issue comes in with EF's change tracking and object fixup: those mechanisms don't support multiple operations happening at the same time, because they need a stable state to work from. That really just limits your ability to do certain things. For example, if you were to run saves and selects in parallel, the results could be garbled: you might not get back things that are actually there, or change tracking could get confused while it is building the necessary insert/update statements. However, for non-atomic work such as selects on independent tables, as you want to do here, there's no real issue, especially if you're not planning further operations like edits on the entities you're selecting and just intend to return them to a view or similar.
If you truly determine you need separate contexts, your best bet is to new up each context in a using block. I haven't actually tried this before, but you should be able to inject DbContextOptions<AmsdbaContext> into the class where these operations happen; it should already be registered in the service collection, since it is injected into your context when the container instantiates it. If not, you can always build one yourself:
var options = new DbContextOptionsBuilder<AmsdbaContext>()
    .UseSqlServer(connectionString)
    .Options;
In either case, then:
List<Tbl1> tbl1data;
List<Tbl2> tbl2data;
List<Tbl3> tbl3data;

using (var tbl1Context = new AmsdbaContext(options))
using (var tbl2Context = new AmsdbaContext(options))
using (var tbl3Context = new AmsdbaContext(options))
{
    var tbl1task = tbl1Context.Tbl1.ToListAsync();
    var tbl2task = tbl2Context.Tbl2.ToListAsync();
    var tbl3task = tbl3Context.Tbl3.ToListAsync();

    tbl1data = await tbl1task;
    tbl2data = await tbl2task;
    tbl3data = await tbl3task;
}
It's better to use await to get the actual results. That way you don't need WaitAll/WhenAll at all, and you're not blocking on calls to Result. Since the tasks are returned hot (already started), simply deferring the awaits until all three tasks have been created is enough to get parallel execution.
Just be careful to select everything you need within the usings. Now that EF Core supports lazy loading, if you're using it, an attempt to access a reference or collection property that hasn't been loaded will trigger an ObjectDisposedException, since the context will be gone by then.
The simple answer is: you do not. You need an alternative way to create DbContext instances. The standard behavior is that every request for a DbContext within the same HttpRequest gets the same instance. You could override the ServiceLifetime, but that changes the behavior of ALL requests.
You can register a second DbContext (a subclass or an interface) with a different service lifetime, but even then you need to handle creation manually, because you need one instance per thread.
So you create them manually. Standard DI simply comes to an end here. It is quite lacking, even compared to older Microsoft DI frameworks, where you could at least set up a separate factory class with an attribute to override creation.
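If you are on EF Core 5.0 or later, the built-in DbContext factory is the least painful way to hand each parallel task its own context without touching the scoped registration. A sketch under that assumption (type and field names follow the question's snippets):

// Registration (Startup.ConfigureServices):
services.AddDbContextFactory<AmsdbaContext>(options =>
    options.UseSqlServer(Configuration.GetConnectionString("ConnectionString")));

// Usage: inject IDbContextFactory<AmsdbaContext> and create a short-lived context per query.
private async Task<List<StagingRule>> GetTbl1FromDatabaseAsync(IDbContextFactory<AmsdbaContext> factory)
{
    using (var context = factory.CreateDbContext())
    {
        return await context.StagingRule
            .Where(x => x.SectionId == _sectionId)
            .ToListAsync();
    }
}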
We're using ASP.NET Entity Framework Core for querying our MSSQL database in our Web API app. Sometimes when we have big traffic, querying to DB ends with this error:
Timeout expired. The timeout period elapsed prior to obtaining a connection from the pool. This may have occurred because all pooled connections were in use and max pool size was reached.
I wonder whether our pattern of using the DbContext and querying is correct, or whether I am missing some using/dispose pattern and the error is caused by a memory leak (after some research I read that I should not use using, because the lifetime is managed by the framework). I am following the documentation...
My connectionString:
"myConnection": "Server=xxx;Database=xxx;user id=xxx;password=xxx;Max Pool Size=200;Timeout=200;"
My Startup.cs
public void ConfigureServices(IServiceCollection services)
{
    .....

    // scoped context
    services.AddDbContext<MyDbContext>(
        options => options.UseSqlServer(this.Configuration.GetConnectionString("myConnection")));
}
Then in the controllers I use the DbContext via dependency injection:
public class MyController : Controller
{
    private MyDbContext Context { get; }

    public MyController(MyDbContext context)
    {
        this.Context = context;
    }

    public ActionResult Get(int id)
    {
        // querying
        return Ok(this.Context.tRealty.Where(x => x.id == id).FirstOrDefault());
    }
}
Should I use something like:
using (var context = this.Context)
{
    return this.Context.tRealty.Where(x => x.id == id).FirstOrDefault();
}
But I think that is a bad pattern when the DbContext comes from dependency injection.
I think the problem was caused by storing objects from DbContext queries in the in-memory cache. I had one big LINQ query against the context with some subqueries inside. I called FirstOrDefault() at the end of the main query, but not inside the subqueries. The controller was fine with that, since it materializes queries by default.
return this.Context.tRealty.AsNoTracking().Where(
    x => x.Id == id && x.RealtyProcess == RealtyProcess.Visible).Select(
    s => new
    {
        .....
        // subquery
        videos = s.TVideo.Where(video => video.RealtyId == id && video.IsPublicOnYouTube)
            .Select(video => video.YouTubeId).ToList(), // ToList() was missing here
        .....
    }).FirstOrDefault();
And there was the problem: the subqueries were holding a connection to the database context while they were stored in the in-memory cache. When I switched to a Redis distributed cache, it initially failed with some strange errors, and it only worked once I added ToList() or FirstOrDefault() to all my subqueries, because a distributed cache needs materialized objects.
Now all my queries are materialized explicitly and I no longer get the "max pool size was reached" error. So be careful when storing results of database context queries in an in-memory cache: you need to materialize every query so that nothing keeps holding a connection somewhere in memory.
You can set the lifetime of the DbContext in your Startup.cs; see if this helps:
services.AddDbContext<MyDbContext>(options => options
    .UseSqlServer(connection), ServiceLifetime.Scoped);
Also if your query is a simple read you can remove tracking by using .AsNoTracking().
Another way to improve your throughput is to prevent locks by using a transaction block with IsolationLevel.ReadUncommitted for simple reads.
You can also use the Snapshot isolation level - which is slightly more restrictive - if you do not want dirty reads.
TransactionOptions transactionOptions = new TransactionOptions() { IsolationLevel = IsolationLevel.ReadUncommitted };

using (TransactionScope transactionScope = new TransactionScope(TransactionScopeOption.Required, transactionOptions))
{
    // insert magic here
}
Edit : As the author of the question mentioned, the above code is not (yet?) possible in EF Core.
A workaround can be found here using an explicit transaction:
using (var connection = new SqlConnection(connectionString))
{
    connection.Open();

    using (var transaction = connection.BeginTransaction())
    {
        // transaction.Commit();
        // transaction.Rollback();
    }
}
I have not tested this.
Edit 2: Another untested snippet where you can have executed commands to set isolation level:
using (var c1 = new SqlConnection(connectionString))
{
    c1.Open();

    // set isolation level
    Exec(c1, "SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;");
    Exec(c1, "BEGIN TRANSACTION;");

    // do your magic here
}
With Exec:
private static void Exec(SqlConnection c, string s)
{
    using (var m = c.CreateCommand())
    {
        m.CommandText = s;
        m.ExecuteNonQuery();
    }
}
Edit 3: According to that thread, Transactions will be supported from .NET Core version 1.2 onwards.
#mukundabrt this is tracked by dotnet/corefx#2949. Note that TransactionScope has already been ported to .NET Core but will only be available in .NET Core 1.2.
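On more recent EF Core versions you can also ask the relational provider for a transaction with an explicit isolation level directly from the context, which avoids dropping down to a raw SqlConnection. A sketch (untested here, like the snippets above):

using (var transaction = context.Database.BeginTransaction(System.Data.IsolationLevel.ReadUncommitted))
{
    // Simple dirty read; no locks are taken on the rows being read.
    var realty = context.tRealty.AsNoTracking().FirstOrDefault(x => x.id == id);
    transaction.Commit();
}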
I am adding an alternative answer, in case anyone lands here with a slightly different root cause, as was the case for my .NET Core MVC application.
In my scenario, the application was producing these "timeout expired... max pool size was reached" errors due to mixed use of async/await and Task.Result within the same controller.
I had done this in an attempt to reuse code by calling a certain asynchronous method in my constructor to set a property. Since constructors do not allow asynchronous calls, I was forced to use Task.Result. However, I was using async Task<IActionResult> methods to await database calls within the same controller. We engaged Microsoft Support, and an Engineer helped explain why this happens:
Looks like we are making a blocking call to an Async method inside [...] constructor.
...
So, basically something is going wrong in the call to above highlighted async method and because of which all the threads listed above are blocked.
Looking at the threads which are doing same operation and blocked:
...
85.71% of threads blocked (174 threads)
We should avoid mixing async and blocking code. Mixed async and blocking code can cause deadlocks, more-complex error handling and unexpected blocking of context threads.
https://msdn.microsoft.com/en-us/magazine/jj991977.aspx
https://blogs.msdn.microsoft.com/jpsanders/2017/08/28/asp-net-do-not-use-task-result-in-main-context/
Action Plan
Please engage your application team to revisit the application code of above mentioned method to understand what is going wrong.
Also, I would appreciate if you could update your application logic to not mix async and blocking code. You could use await Task instead of Task.Wait or Task.Result.
So in our case, I pulled the Task.Result call out of the constructor and moved the work into a private async method that we await. Since I only want it to run once per use of the controller, I store the result in that property and only run the task inside that method if the property is still null.
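In code, the change was roughly this shape (class, property, and entity names here are illustrative, not the actual application code):

public class ReportsController : Controller
{
    private readonly MyDbContext context;
    private ReportSettings settings; // cached per controller instance

    public ReportsController(MyDbContext context)
    {
        this.context = context;
        // Before: settings = context.ReportSettings.FirstOrDefaultAsync().Result; // blocking call - starved the thread pool
    }

    // After: do the work in an async method and await it from the actions that need it.
    private async Task<ReportSettings> GetSettingsAsync()
    {
        if (settings == null)
        {
            settings = await context.ReportSettings.FirstOrDefaultAsync();
        }

        return settings;
    }

    [HttpGet]
    public async Task<IActionResult> Index()
    {
        var currentSettings = await GetSettingsAsync();
        return Ok(currentSettings);
    }
}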
In my defense, I expected the compiler would at least throw a warning if mixing async and blocking code is so problematic. However, it seems obvious enough to me, in hindsight!
Hopefully, this helps someone...
I'm hoping to finally get to the very bottom of an ongoing problem with Entity Framework DbContexts. The history of my problem is that sporadically - especially when requests come in in fast succession - my DbContext throws a variety of strange errors, including the following:
System.InvalidOperationException: There is already an open DataReader associated with this Command which must be closed first.
System.InvalidOperationException: Internal connection fatal error.
My MVC code is based around a basic pattern where I have a base controller, which looks like this:
public class BaseController : Controller
{
    protected readonly DbContext db = new DbContext();

    protected override void Dispose(bool disposing)
    {
        db.Dispose();
        base.Dispose(disposing);
    }
}
All other controllers derive from this base controller, thereby making the DbContext available as necessary to controller actions, none of which are asynchronous. The only exception is my custom authorization, which also creates a DbContext upon access and is called with virtually every controller action (via attribute):
public class MyAuthorizeAttribute : AuthorizeAttribute
{
    private DbContext db;

    protected override bool IsAuthorized(HttpActionContext actionContext)
    {
        db = new DbContext();

        var user =
            db.Security.FirstOrDefault(u =>
                u.Id == actionContext.ControllerContext.Request.Headers.First(h =>
                    h.Key == "Id").Value);

        return (user != null);
    }
}
I've also experimented with the following to no avail:
Removed all asynchronicity from my controller actions
Removed lazy loading from DbContext and inserted explicit Include statements with each call
Looking through StackOverflow, other people appear to have had similar issues:
Weird race conditions when I send high frequency requests to my datacontext
Random errors occur with per-request DbContext
Neither answer really helped me get to the bottom of the problem, although the OP's answer on the second post said: "After further investigation I found out that request processing thread sometimes steals DbContext from other thread". I'm not sure how that really applies to my case, though.
Is there something fundamentally wrong with my design? Wrapping each controller action's DbContext into a using block can't be right, even though this blog says it is - but doesn't that cause other problems, such as returning objects that are no longer attached to a DbContext (and therefore lose change tracking)...?
when requests come in in fast succession
That made me think of a problem I ran into about a year ago, so I don't think it is related to EF6. It took me quite some time to figure it out back then, though.
Allow your database to have more than one pending request per connection: change your connection string to include MultipleActiveResultSets=True.
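For example (an illustrative connection string, not the asker's actual one):

Server=myServer;Database=myDb;User Id=xxx;Password=xxx;MultipleActiveResultSets=True;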
I was asked to implement Castle DynamicProxy in my ASP.NET web application, and I was going through a couple of articles from the Castle Project and Code Project about using dynamic proxies in ASP.NET web applications.
Both articles dealt with creating interceptors, but I can't see why interceptors are used with classes. Why should I intercept my class when it is behaving properly?
Let's say that your class needs to do 3 things for a certain operation:
Perform a security check;
Log the method call;
Cache the result.
Let's further assume that your class doesn't know anything about the specific way you've configured your security, logging, or caching. You need to depend on abstractions of these things.
There are a few ways to go about it. One way would be to set up a bunch of interfaces and use constructor injection:
public class OrderService : IOrderService
{
    private readonly IAuthorizationService auth;
    private readonly ILogger logger;
    private readonly ICache cache;

    public OrderService(IAuthorizationService auth, ILogger logger,
        ICache cache)
    {
        if (auth == null)
            throw new ArgumentNullException("auth");
        if (logger == null)
            throw new ArgumentNullException("logger");
        if (cache == null)
            throw new ArgumentNullException("cache");
        this.auth = auth;
        this.logger = logger;
        this.cache = cache;
    }

    public Order GetOrder(int orderID)
    {
        auth.AssertPermission("GetOrder");
        logger.LogInfo("GetOrder:{0}", orderID);
        string cacheKey = string.Format("GetOrder-{0}", orderID);
        if (cache.Contains(cacheKey))
            return (Order)cache[cacheKey];
        Order order = LookupOrderInDatabase(orderID);
        cache[cacheKey] = order;
        return order;
    }
}
This isn't horrible code, but think of the problems we're introducing:
The OrderService class can't function without all three dependencies. If we want to make it so it can, we need to start peppering the code with null checks everywhere.
We're writing a ton of extra code to perform a relatively simple operation (looking up an order).
All this boilerplate code has to be repeated in every method, making for a very large, ugly, bug-prone implementation.
Here's a class which is much easier to maintain:
public class OrderService : IOrderService
{
    [Authorize]
    [Log]
    [Cache("GetOrder-{0}")]
    public virtual Order GetOrder(int orderID)
    {
        return LookupOrderInDatabase(orderID);
    }
}
In Aspect-Oriented Programming terms, the places where this extra behaviour gets applied are called join points, and a set of join points is called a pointcut; attributes like these are one way of marking them.
Instead of actually writing dependency code, over and over again, we leave "hints" that some additional operations are supposed to be performed for this method.
Of course, these attributes have to get turned into code sometime, but you can defer that all the way up to your main application code, by creating a proxy for the OrderService (note that the GetOrder method has been made virtual because it needs to be overridden for the service), and intercepting the GetOrder method.
Writing the interceptor might be as simple as this:
public class LoggingInterceptor : IInterceptor
{
    public void Intercept(IInvocation invocation)
    {
        if (Attribute.IsDefined(invocation.Method, typeof(LogAttribute)))
        {
            Console.WriteLine("Method called: " + invocation.Method.Name);
        }
        invocation.Proceed();
    }
}
And creating the proxy would be:
var generator = new ProxyGenerator();
var orderService = (IOrderService)generator.CreateClassProxy(typeof(OrderService),
    new LoggingInterceptor());
This is not only a lot less repetitive code, but it completely removes the actual dependency, because look what we've done - we don't even have an authorization or caching system yet, but the system still runs. We can just insert the authorization and caching logic later by registering another interceptor and checking for AuthorizeAttribute or CacheAttribute.
Hopefully this explains the "why."
Sidebar: As Krzysztof Koźmic comments, it's not a DP "best practice" to use a dynamic interceptor like this. In production code, you don't want to have the interceptor running for unnecessary methods, so use an IInterceptorSelector instead.
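A rough sketch of that approach (check it against the Castle.DynamicProxy version you're using): an IInterceptorSelector can limit the logging interceptor to methods that actually carry LogAttribute, and it is attached through ProxyGenerationOptions.

public class LogAttributeInterceptorSelector : IInterceptorSelector
{
    public IInterceptor[] SelectInterceptors(Type type, MethodInfo method, IInterceptor[] interceptors)
    {
        // Only attach interceptors to methods marked with [Log]; everything else runs unproxied.
        return Attribute.IsDefined(method, typeof(LogAttribute))
            ? interceptors
            : new IInterceptor[0];
    }
}

// Wiring it up when the proxy is created:
var options = new ProxyGenerationOptions { Selector = new LogAttributeInterceptorSelector() };
var orderService = (IOrderService)generator.CreateClassProxy(
    typeof(OrderService), options, new LoggingInterceptor());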
The reason you would use Castle DynamicProxy is for what's called Aspect-Oriented Programming. It lets you inject code into the standard execution flow of your application without making the code itself dependent on it.
A simple example is, as always, logging: you create a dynamic proxy around a class that is throwing errors, and the proxy logs the data going into each method, catches any exceptions, and logs those too.
Using the interceptor, your current code has no idea it exists (assuming your software is built in a decoupled way, with interfaces used correctly), and you can change the registration of your classes in an inversion-of-control container to use the proxied class instead, without changing a single line elsewhere. Then, once you have solved the bug, you can turn the proxying off.
A more advanced use of proxying can be seen in NHibernate, where all of the lazy loading is handled through proxies.