Access to disposed closure warning using Azure Transient Fault Handling Retry Policy - c#

We have a worker role which processes records and sends Azure Service Bus messages as needed based on the results of the query; it is basically a queue-processing service. As part of the best practices for using SQL Azure, we have wrapped all of our query statements in a retry policy (this detects transient errors and retries based on the defined policy). Note that we actually send the message from within the using statement, so there is no 'leak' of the db variable.
Inside our using statement, ReSharper throws the 'Access to disposed closure' warning, most likely because we are passing our DataContext as a Func parameter of the retry policy.
My question is: am I OK in my assumption that ReSharper is not detecting this pattern correctly, or is there an alternative way to write these functions in order to prevent the warning above?
The Code
The db variable in the retryPolicy.ExecuteAction is what is getting flagged
using (var db = new MyEntities())
{
    var thingsToUpdate = retryPolicy.ExecuteAction(() => db.QueueTable.Where(x => x.UpdateType == "UpdateType" && x.DueNext < DateTime.UtcNow).Take(30).ToList());
    if (!thingsToUpdate.Any())
    {
        return;
    }
    while (thingsToUpdate.Any())
    {
        var message = new ServiceMessage
        {
            Type = "UpdateType",
            Requests = thingsToUpdate.Select(x => new ServiceMessageRequest
            {
                LastRan = x.LastRan,
                ParentItemId = x.ThingId,
                OwnerId = x.Thing.ForiegnKeyid
            }).ToList()
        };
        SendMessage("UpdateType", message);
        foreach (var thing in thingsToUpdate)
        {
            thing.LastRan = DateTime.UtcNow;
            thing.DueNext = DateTime.UtcNow.AddMinutes(10);
        }
        retryPolicy.ExecuteAction(() => db.SaveChanges());
        thingsToUpdate = retryPolicy.ExecuteAction(() => db.QueueTable.Where(x => x.UpdateType == "UpdateType" && x.DueNext < DateTime.UtcNow).Take(30).ToList());
    }
}
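For context, the retryPolicy used above is assumed to be constructed elsewhere. A minimal sketch of how such a policy is typically built with the Transient Fault Handling Application Block looks like the following (the strategy values here are illustrative assumptions, not our actual settings):
// Incremental retry strategy combined with the SQL Azure transient error detection
// strategy from the Transient Fault Handling Application Block; values are placeholders.
var retryStrategy = new Incremental(5, TimeSpan.FromSeconds(1), TimeSpan.FromSeconds(2));
var retryPolicy = new RetryPolicy<SqlDatabaseTransientErrorDetectionStrategy>(retryStrategy);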
Additional Information
I also posted this to the ReSharper forums for a broader audience and this particular issue was addressed in a little more detail over there. For posterity, you can find the question here.

I guess your ExecuteAction executes your lambda immediately. If so, you should annotate the lambda parameter of your ExecuteAction method with ReSharper's [InstantHandle] attribute.
For example:
public void ExecuteAction([InstantHandle] Action action)
{
...
}
You can either reference JetBrains.Annotations.dll to get this attribute or just copy all of the attributes into your project. See more info on the JetBrains site here and here.
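If you choose to copy the attributes rather than reference the DLL, a minimal local declaration of just this attribute could look like the sketch below (the exact attribute targets in the official annotations file may differ, so copying the official file is preferable):
using System;

namespace JetBrains.Annotations
{
    // Tells ReSharper that the delegate passed to this parameter is executed
    // immediately and not stored, so disposed-closure warnings do not apply.
    [AttributeUsage(AttributeTargets.Parameter)]
    public sealed class InstantHandleAttribute : Attribute
    {
    }
}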

Related

MassTransit custom Query/Command topology for Request/Response mechanism

I'm currently reworking a microservices-based solution into a modular monolith with four APIs (pro, cyclist, management, thirdparty). One of the changes that needs to be made is adapting the topology of our broker (RabbitMQ) so it fits our requirements. These requirements are shown in the diagram below.
The idea is that we currently always use the Request/Response mechanism for all our commands and queries and the Publish mechanism for events, meaning that we always expect a response whenever we issue a query (obviously) or a command.
We want the topology to support scaling in such a way that if API1 (any instance of that executable) has multiple instances:
- commands/queries issued by any instance of API1 will be executed by consumers running in any instance of API1; this means that if both the API1 and API2 executables have the same consumer, API2 consumers cannot execute commands/queries issued by API1
- when scaling, queues for commands and queries should not be scaled; new consumers are simply added and round robin kicks in
- events are always received by all registered consumers, so when scaling, new queues are created
Right now I'm trying to figure out how to create this topology in MassTransit, but I can't seem to get rid of the default publish exchange of type fanout. Here's the code I use for automatic registration of command/query endpoints and queues:
private static IRabbitMqBusFactoryConfigurator AddNonEventConsumer<TConsumer>(
    IRabbitMqBusFactoryConfigurator config,
    IRegistration context)
    where TConsumer : class, IConsumer
{
    var routingKey = Assembly.GetEntryAssembly().GetName().Name;

    var messageType = typeof(TConsumer)
        .GetInterfaces()
        ?.First(i => i.IsGenericType)
        ?.GetGenericArguments()
        ?.First();

    if (messageType == null)
    {
        throw new InvalidOperationException(
            $"Message type could not be extracted from the consumer type. ConsumerTypeName=[{typeof(TConsumer).Name}]");
    }

    config.ReceiveEndpoint(e =>
    {
        // var exchangeName = new StringBuilder(messageType.FullName)
        //     .Replace($".{messageType.Name}", string.Empty)
        //     .Append($":{messageType.Name}")
        //     .ToString();
        var exchangeName = messageType.FullName;

        e.ConfigureConsumeTopology = false;
        e.ExchangeType = ExchangeType.Direct;
        e.Consumer<TConsumer>(context);
        e.Bind(exchangeName, b =>
        {
            e.ExchangeType = ExchangeType.Direct;
            b.RoutingKey = routingKey;
        });
    });

    config.Send<TestCommand>(c =>
    {
        c.UseRoutingKeyFormatter(x => routingKey);
    });
    config.Publish<TestCommand>(c =>
    {
        c.ExchangeType = ExchangeType.Direct;
    });

    return config;
}
Again, we do want to use the Request/Response mechanism for queries/commands and the Publish mechanism for events (events are not part of this question, they're a topic of their own; just queries/commands here).
The question is - how do I configure endpoints and queues in this method in order to achieve the desired topology?
Alternative question - how else can I achieve my goal?
You're almost there, but you need to configure a couple of additional items. First, when publishing, you'll need to set the routing key, which can be done using a routing key formatter. Also, configure the message type to use a direct exchange.
configurator.Send<TestCommand>(x =>
{
    x.UseRoutingKeyFormatter(context => /* something that gets your string, pro/cyclist */);
});
config.Publish<TestCommand>(c =>
{
    c.ExchangeType = ExchangeType.Direct;
});
Also, if you're using custom exchange names, I'd add a custom entity name formatter. This will change the exchange names used for message types, so you can stick with message types in your application – keeping all the magic string stuff in one place.
class CustomEntityNameFormatter :
    IEntityNameFormatter
{
    public string FormatEntityName<T>()
        where T : class
    {
        return new StringBuilder(typeof(T).FullName)
            .Replace($".{typeof(T).Name}", string.Empty)
            .Append($":{typeof(T).Name}")
            .ToString();
    }
}

config.MessageTopology
    .SetEntityNameFormatter(new CustomEntityNameFormatter());
Then, when configuring your receive endpoint, do not change the endpoint's exchange type; only change the bound exchange to match the publish topology. Using an endpoint name formatter custom to your application, you can configure it manually as shown:
var routingKey = Assembly.GetEntryAssembly().GetName().Name;
var endpointNameFormatter = new CustomEndpointNameFormatter();

config.ReceiveEndpoint(endpointNameFormatter.Message<TMessage>(), e =>
{
    e.ConfigureConsumeTopology = false;
    e.Bind<TMessage>(b =>
    {
        e.ExchangeType = ExchangeType.Direct;
        b.RoutingKey = routingKey;
    });
    e.Consumer<TConsumer>(context);
});
This is just a rough sample to get you started. There is a direct exchange sample on GitHub that you can look at as well to see how various things are done there. You could likely clean up the message type detection as well to avoid all the type-based reflection, but that's more complex.
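As a hedged sketch of that cleanup (my own illustration, not part of the original answer): if the helper is constrained on both the consumer and its message type, the reflection-based lookup disappears entirely.
private static IRabbitMqBusFactoryConfigurator AddNonEventConsumer<TConsumer, TMessage>(
    IRabbitMqBusFactoryConfigurator config,
    IRegistration context)
    where TConsumer : class, IConsumer<TMessage>
    where TMessage : class
{
    // Same routing key convention as the question's code.
    var routingKey = Assembly.GetEntryAssembly().GetName().Name;

    config.ReceiveEndpoint(e =>
    {
        e.ConfigureConsumeTopology = false;
        e.Consumer<TConsumer>(context);
        e.Bind<TMessage>(b =>
        {
            e.ExchangeType = ExchangeType.Direct;
            b.RoutingKey = routingKey;
        });
    });

    return config;
}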

Nopcommerce Update entity issue

Using NopCommerce 3.8 and Visual Studio 2015 Professional.
I have created a plugin that is responsible for making RESTful calls to my Web API, which exposes a different DB to that of Nop.
The process is run via a Nop task; it successfully pulls the data back, and I can step through and manipulate it as I see fit. No issues so far.
The issue comes when I try to update a record on the product table: I perform the update, but nothing happens. No change, no error.
I believe this is because the context has no idea about my newly instantiated product object, but I'm drawing a blank on what I need to do in relation to my particular example.
Similar questions usually reference a "model" object that is part of the method's parameters; "model" has a ToEntity method, which seems to be the answer in similar questions on Stack Overflow.
However, my example doesn't have the ToEntity class/method, possibly because my parameter is actually a list of products. To clarify, here is my code.
Method in RestClient.cs
public async Task<List<T>> GetAsync()
{
    try
    {
        var httpClient = new HttpClient();
        var json = await httpClient.GetStringAsync(ApiControllerURL);
        var taskModels = JsonConvert.DeserializeObject<List<T>>(json);
        return taskModels;
    }
    catch (Exception e)
    {
        return null;
    }
}
Method in my Service Class
public async Task<List<MWProduct>> GetProductsAsync()
{
    RestClient<MWProduct> restClient = new RestClient<MWProduct>(ApiConst.Products);
    var productsList = await restClient.GetAsync();
    InsertSyncProd(productsList.Select(x => x).ToList());
    return productsList;
}

private void InsertSyncProd(List<MWProduct> inserted)
{
    var model = inserted.Select(x =>
    {
        switch (x.AD_Action)
        {
            case "I":
                //_productService.InsertProduct(row);
                break;
            case "U":
                UpdateSyncProd(inserted);
                .....
Then the method to bind and update
private void UpdateSyncProd(List<MWProduct> inserted)
{
    var me = inserted.Select(x =>
    {
        var productEnt = _productRepos.Table.FirstOrDefault(ent => ent.Sku == x.Sku.ToString());
        if (productEnt != null)
        {
            productEnt.Sku = x.Sku.ToString();
            productEnt.ShortDescription = x.ShortDescription;
            productEnt.FullDescription = x.FullDescription;
            productEnt.Name = x.Name;
            productEnt.Height = x.Pd_height != null ? Convert.ToDecimal(x.Pd_height) : 0;
            productEnt.Width = x.Pd_width != null ? Convert.ToDecimal(x.Pd_width) : 0;
            productEnt.Length = x.Pd_depth != null ? Convert.ToDecimal(x.Pd_depth) : 0;
            productEnt.UpdatedOnUtc = DateTime.UtcNow;
        }
        //TODO: set to entity so context knows and can update
        _productService.UpdateProduct(productEnt);
        return productEnt;
    });
}
So as you can see, I get the data and pass it through to a certain method based on the result. In that method I iterate over the list, pull the entity from the table, and then update it via the product service using that manipulated entity.
So what am I missing here? I'm sure it's one step, and I think it may be because either 1) the context still has no idea about the entity in question, or 2) the calls are incorrect.
Summary
The update is not updating, possibly because the context has no knowledge of the entity OR my methodology is wrong (probably both).
UPDATE:
I added some logger.InsertLog calls all around my service; it runs through fine, all the way to the call to update. But again, I check the product and nothing has changed in the admin section.
plugin
I have provided the full source, as I think maybe this has something to do with the rest of the code setup.
UPDATE:
Added the following for testing in my Execute method.
var myprod = _productRepos.GetById(4852);
myprod.ShortDescription = "db test";
_productRepos.Update(myprod);
This successfully updates the product description. I moved my methods from my service into the task class, but still no luck. The more I look at it, the more I'm thinking that my async is killing off the DB context somehow.
Turned off async, bound the GetById result to a new product, and also removed the lambda for the switch and changed it to a foreach loop. That seems to finally update the results.
I cannot confirm whether async is the culprit; currently the Web API seems to be returning the same result even though the data has changed (some weird caching by default in .NET Core?), so I'm creating a new question for that.
UPDATE: It appears that the issue stems from poor debugging of async. In each instance I was trying to iterate over the result of an await call; simply put, I was iterating over a collection that technically may or may not have completed yet, and due to poor debugging I was not aware of it.
So the answer: await your collection, then iterate afterwards.
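A minimal sketch of that fix (a hypothetical method, reusing the names from the code above purely for illustration): await the collection first, then iterate with a plain foreach so the updates actually execute.
private async Task SyncProductsAsync()
{
    // Await the API call so the list is actually complete before iterating.
    var products = await GetProductsAsync();

    foreach (var mwProduct in products)
    {
        var productEnt = _productRepos.Table
            .FirstOrDefault(ent => ent.Sku == mwProduct.Sku.ToString());
        if (productEnt == null) continue;

        productEnt.ShortDescription = mwProduct.ShortDescription;
        productEnt.UpdatedOnUtc = DateTime.UtcNow;

        // foreach runs eagerly, unlike the deferred Select used earlier.
        _productService.UpdateProduct(productEnt);
    }
}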

How can I Lock an Azure Table partition in an Azure Function using IQueryable and IAsyncCollector?

I'm fiddling with Azure Functions, combining it with CQRS and event sourcing. I'm using Azure Table Storage as an Event Store. The code below is a simplified version to not distract from the problem.
I'm not interested in any code tips, since this is not a final version of the code.
public static async Task Run(BrokeredMessage commandBrokeredMessage, IQueryable<DomainEvent> eventsQueryable, IAsyncCollector<IDomainEvent> eventsCollector, TraceWriter log)
{
    var command = commandBrokeredMessage.GetBody<FooCommand>();
    var committedEvents = eventsQueryable.Where(e => e.PartitionKey == command.AggregateRootId);
    var expectedVersion = committedEvents.Max(e => e.Version);

    // some domain logic that will result in domain events
    var uncommittedEvents = HandleFooCommand(command, committedEvents);

    // using(Some way to lock partition)
    // {
        var currentVersion = eventsQueryable.Where(e => e.PartitionKey == command.AggregateRootId).Max(e => e.Version);
        if (expectedVersion != currentVersion)
        {
            throw new ConcurrencyException("expected version is not the same as current version");
        }

        var i = currentVersion;
        foreach (var domainEvent in uncommittedEvents.OrderBy(e => e.Timestamp))
        {
            i++;
            domainEvent.Version = i;
            await eventsCollector.AddAsync(domainEvent);
        }
    // }
}
public class DomainEvent : TableEntity
{
    private string eventType;

    public virtual string EventType
    {
        get { return eventType ?? (eventType = GetType().UnderlyingSystemType.Name); }
        set { eventType = value; }
    }

    public long Version { get; set; }
}
My efforts
To be fair, I could not try anything, because I don't know where to start or whether this is even possible. I did some research which did not solve my problem, but it might help you solve it.
Do Azure Tables support locking?
Yes, they do: Managing Concurrency in Microsoft Azure Storage. It's called leasing, but I do not know how to implement this in an Azure Function.
Other sources
Azure Functions triggers and bindings developer reference
Azure Functions C# developer reference
Tips, suggestions, alternatives
I'm always open to suggestions on how to solve problems, but I cannot accept those as an answer to my question. Unless the answer to my question is "no", I cannot mark an alternative as the answer. I'm not looking for the best way to solve my problem; I want it to work the way I engineered it. I know this is stubborn, but this is practice/fiddling.
Blob leases would indeed work pretty well for what you're trying to accomplish (the Functions runtime actually makes extensive use of that internally).
If, before working on a partition, you acquire a lease on a blob (by convention, a blob named after the partition, or something like that) you'd be able to ensure only a given function is working on that partition.
The article you've linked to does show an example of lease acquisition and release; you can find more information in the documentation.
One thing you want to ensure is that you flush your collector before you leave the lock scope (by calling FlushAsync on it).
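To make that concrete, a hedged sketch using the classic WindowsAzure.Storage SDK (the connection string, container name and lease duration are assumptions) could replace the commented-out lock scope in the question:
// Lease a blob named after the partition; only the lease holder may append events.
var account = CloudStorageAccount.Parse(storageConnectionString);
var container = account.CreateCloudBlobClient().GetContainerReference("partition-locks");
await container.CreateIfNotExistsAsync();

var lockBlob = container.GetBlockBlobReference(command.AggregateRootId);
if (!await lockBlob.ExistsAsync())
{
    await lockBlob.UploadTextAsync(string.Empty); // the blob exists only to be leased
}

var leaseId = await lockBlob.AcquireLeaseAsync(TimeSpan.FromSeconds(30), null);
try
{
    // ... re-read the current version, compare with expectedVersion, AddAsync the events ...
    await eventsCollector.FlushAsync(); // persist before the lease is released
}
finally
{
    await lockBlob.ReleaseLeaseAsync(AccessCondition.GenerateLeaseCondition(leaseId));
}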
I hope this helps!

How to get access to application context from the thread that was created via Task.Factory.StartNew

I have a method that runs on the main application thread, but it creates a new Task for a long-running operation and awaits its result.
public async Task<CalculatorOutputModel> CalculateXml(CalculatorInputModel input)
{
    // 1
    return await Task.Factory.StartNew(
        new Func<CalculatorOutputModel>(() =>
        {
            // 2
            using (var ms = Serializer.Serialize(input))
            {
                ms.Position = 0;
                using (var rawResult = Channel.RawGetFrbXmlOutputs(ms)) // 3
                {
                    var result = Parser.Parse(rawResult);
                    return result;
                }
            }
        }),
        CancellationToken.None,
        TaskCreationOptions.None,
        TaskScheduler.FromCurrentSynchronizationContext());
}
My problem is that the AppContext at points (1) and (2) is "a bit" different. At point 1 there is the usual application context in the Current property, which has all the data I need. At point 2, as you understand, there is a null reference in AppContext.Current, so I can't access any data. Accessing the context at point 2 seems easy to solve: just capture the current context in a local variable or pass it as a parameter. The problem for me is more difficult, because I need to access the context somewhere deep inside the line marked as "3".
The class itself is derived from System.ServiceModel.ClientBase<TChannel>, and the place where I need access to the context is the class that implements IClientMessageInspector.
class CalculatorMessageInspector : IClientMessageInspector
{
    public object BeforeSendRequest(ref Message request, IClientChannel channel)
    {
        if (AppContext.Current != null)
        {
            // never goes here
        }
        return null;
    }
}
Just to clarify, here is the call stack (I can't pass my context into the required method):
So:
- I can't pass the context to the Channel's method, because it does not make any sense;
- I can't save the required parameters from the context in the Channel, because it is a proxy class;
- I can't save the required parameters in CalculatorMessageInspector, because it is created in a place where the current context is already null.
Can anyone advise how I can stay within the same context on another thread? Or, at least, how can I pass a parameter from the place marked "2" into the method hierarchy? Maybe I can use SynchronizationContext somehow to achieve this? Thanks a lot for any suggestions.
Update
AppContext.Current can be considered the same as HttpContext.Current != null ? HttpContext.Current.Items[AppContextKey] : null
Update 2
So it seems that, in this case, there is no common solution to persist the context. Hence, the only applicable solution in this concrete (and quite specific) situation is to capture the required parameters using a closure and save them in an object that will be available in the required method (for me, adding properties to the implementer of IEndpointBehavior worked, but that solution is a bit odd). Therefore, the most applicable solution is to throw away the asynchronous wrapping over the sync call, so the AppContext never "goes away". I marked Stephen's answer as the right one, then.
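For completeness, a hypothetical sketch of that closure/property workaround (CalculatorMessageInspector is assumed to have a matching constructor; the captured value is just an example):
using System.ServiceModel.Channels;
using System.ServiceModel.Description;
using System.ServiceModel.Dispatcher;

class CalculatorEndpointBehavior : IEndpointBehavior
{
    // Hypothetical value captured from AppContext.Current while it is still available.
    private readonly string capturedValue;

    public CalculatorEndpointBehavior(string capturedValue)
    {
        this.capturedValue = capturedValue;
    }

    public void ApplyClientBehavior(ServiceEndpoint endpoint, ClientRuntime clientRuntime)
    {
        // Hand the captured value to the inspector instead of reading AppContext.Current there.
        clientRuntime.ClientMessageInspectors.Add(new CalculatorMessageInspector(capturedValue));
    }

    public void AddBindingParameters(ServiceEndpoint endpoint, BindingParameterCollection bindingParameters) { }
    public void ApplyDispatchBehavior(ServiceEndpoint endpoint, EndpointDispatcher endpointDispatcher) { }
    public void Validate(ServiceEndpoint endpoint) { }
}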
To use async and await on ASP.NET, you must target .NET 4.5 and turn off quirks mode. The best way to do this is to set /configuration/system.web/httpRuntime#targetFramework="4.5" in your web.config.
Also, you shouldn't use Task.Run or Task.Factory.StartNew on ASP.NET. So your code should look more like this:
public CalculatorOutputModel CalculateXml(CalculatorInputModel input)
{
    using (var ms = Serializer.Serialize(input))
    {
        ms.Position = 0;
        using (var rawResult = Channel.RawGetFrbXmlOutputs(ms))
        {
            var result = Parser.Parse(rawResult);
            return result;
        }
    }
}

C# WCF closing channels and using functions Func<T>

This is the point: I have a WCF service, and it is working now. So I began to work on the client side, and when the application was running an exception showed up: timeout. So I began to read. There are many examples of how to keep the connection alive, but I also found that the best way is to create the channel, use it, and dispose of it. Honestly, I liked that. So, now reading about the best way to close the channel, there are two links that could be useful to anybody who needs them:
1. Clean up clients, the right way
2. Using Func
In the first link, this is the example:
IIdentityService _identitySvc;
...
if (_identitySvc != null)
{
    ((IClientChannel)_identitySvc).Close();
    ((IDisposable)_identitySvc).Dispose();
    _identitySvc = null;
}
So, if the channel is not null, then it is closed, disposed, and assigned null. But I have a little question: in this example the channel has a .Close() method, but in my case IntelliSense is not showing a Close() method; it only exists on the factory object. So I believe I have to write it. But should it go in the interface that has the contracts, or in the class that implements it? And what should this method do?
Now, the next link has something I haven't tried before: Func<T>. After reading about its goal, it's quite interesting. It creates a function that, using lambdas, creates the channel, uses it, closes it, and disposes of it. This example implements that function like a using statement. It's really good, and an excellent improvement. But I need a little help; to be honest, I can't understand the function, so a little explanation from an expert would be very useful. This is the function:
TReturn UseService<TChannel, TReturn>(Func<TChannel, TReturn> code)
{
    var chanFactory = GetCachedFactory<TChannel>();
    TChannel channel = chanFactory.CreateChannel();
    bool error = true;
    try
    {
        TReturn result = code(channel);
        ((IClientChannel)channel).Close();
        error = false;
        return result;
    }
    finally
    {
        if (error)
        {
            ((IClientChannel)channel).Abort();
        }
    }
}
And this is how is being used:
int a = 1;
int b = 2;
int sum = UseService((ICalculator calc) => calc.Add(a, b));
Console.WriteLine(sum);
Yep, I think it is really, really good; I'd like to understand it so I can use it in my project.
And, as always, I hope this could be helpful to a lot of people.
The UseService method accepts a delegate, which uses the channel to send the request. The delegate has a parameter and a return value; you put the call to the WCF service inside the delegate.
Inside UseService, it creates the channel and passes it to the delegate, which you provide. After the call finishes, it closes the channel.
The proxy object implements more than just your contract - it also implements IClientChannel, which allows control of the proxy's lifetime.
The code in the first example is not reliable - it will leak if the channel is already busted (e.g. the service has gone down in a session-based interaction). As you can see in the second version, in the case of an error it calls Abort on the proxy, which still cleans up the client side.
You can also do this with an extension method as follows:
enum OnError
{
    Throw,
    DontThrow
}

static class ProxyExtensions
{
    public static void CleanUp(this IClientChannel proxy, OnError errorBehavior)
    {
        try
        {
            proxy.Close();
        }
        catch
        {
            proxy.Abort();
            if (errorBehavior == OnError.Throw)
            {
                throw;
            }
        }
    }
}
However, the usage of this is a little cumbersome
((IClientChannel)proxy).CleanUp(OnError.DontThrow);
But you can make this more elegant if you make your own proxy interface that extends both your contract and IClientChannel
interface IPingProxy : IPing, IClientChannel
{
}
To answer the question left in the comment on Jason's answer, a simple example of GetCachedFactory may look like the below. The example looks up the endpoint to create by finding the endpoint in the config file whose "Contract" attribute equals the ConfigurationName of the service the factory is to create.
ChannelFactory<T> GetCachedFactory<T>()
{
    var endPointName = EndPointNameLookUp<T>();
    return new ChannelFactory<T>(endPointName);
}

// Determines the name of the endpoint the factory will create by finding the endpoint in the config file which is the same as the type of the service the factory is to create
string EndPointNameLookUp<T>()
{
    var contractName = LookUpContractName<T>();
    foreach (ChannelEndpointElement serviceElement in ConfigFileEndPoints)
    {
        if (serviceElement.Contract == contractName) return serviceElement.Name;
    }
    return string.Empty;
}

// Retrieves the list of endpoints in the config file
ChannelEndpointElementCollection ConfigFileEndPoints
{
    get
    {
        return ServiceModelSectionGroup.GetSectionGroup(
            ConfigurationManager.OpenExeConfiguration(
                ConfigurationUserLevel.None)).Client.Endpoints;
    }
}

// Retrieves the ConfigurationName of the service being created by the factory
string LookUpContractName<T>()
{
    var attributeNamedArguments = typeof(T).GetCustomAttributesData()
        .Select(x => x.NamedArguments.SingleOrDefault(ConfigurationNameQuery));
    var contractName = attributeNamedArguments.Single(ConfigurationNameQuery).TypedValue.Value.ToString();
    return contractName;
}

Func<CustomAttributeNamedArgument, bool> ConfigurationNameQuery
{
    get { return x => x.MemberInfo != null && x.MemberInfo.Name == "ConfigurationName"; }
}
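Incidentally, the GetCachedFactory above creates a new factory on every call despite its name. A hedged sketch of actually caching the factories (my own addition, not part of the answer; requires System.Collections.Concurrent) could look like this:
// Cache channel factories per contract type; creating a ChannelFactory is relatively expensive.
static readonly ConcurrentDictionary<Type, object> FactoryCache =
    new ConcurrentDictionary<Type, object>();

ChannelFactory<T> GetCachedFactory<T>()
{
    return (ChannelFactory<T>)FactoryCache.GetOrAdd(
        typeof(T),
        _ => new ChannelFactory<T>(EndPointNameLookUp<T>()));
}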
A better solution, though, is to let an IoC container manage the creation of the client for you. For example, using Autofac it would look like the following. First you need to register the service like so:
var builder = new ContainerBuilder();
builder.Register(c => new ChannelFactory<ICalculator>("WSHttpBinding_ICalculator"))
    .SingleInstance();
builder.Register(c => c.Resolve<ChannelFactory<ICalculator>>().CreateChannel())
    .UseWcfSafeRelease();
container = builder.Build();
Where "WSHttpBinding_ICalculator" is the name of the endpoint in the config file. Then later you can use the service like so:
using (var lifetime = container.BeginLifetimeScope())
{
    var calc = lifetime.Resolve<ICalculator>();
    var sum = calc.Add(a, b);
    Console.WriteLine(sum);
}
