Grouping same WCF Requests - c#

If a WCF service gets the same request twice, meaning the same MD5 hash over all parameters, I want to block every request except the first until processing is done, and then notify all waiting clients.
What is the best way of doing this? I thought of something like a channel sink; maybe there is a finished implementation for achieving this?

I'm not sure what would be the 'best' fit for the WCF architecture, but you should consider setting your InstanceContextMode to Single, as you're likely to be doing a lot of synchronization steps for what you want to do here.
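For reference, a minimal sketch of that service configuration (MyService/IMyService are placeholder names), keeping ConcurrencyMode.Multiple so that waiting requests don't serialize the whole service:
// Sketch: one shared service instance for all calls, with multiple
// concurrent threads allowed; synchronization is then done by hand.
[ServiceBehavior(InstanceContextMode = InstanceContextMode.Single,
                 ConcurrencyMode = ConcurrencyMode.Multiple)]
public class MyService : IMyService
{
    // ...
}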
How about something like this? You will obviously need to do some synchronization on the dictionary itself, but at least it's a start.
private IDictionary<string, RequestToken> RequestTokens =
    new Dictionary<string, RequestToken>();

public MyResponse MyMethod(MyRequest request)
{
    // get the MD5 for the request
    var md5 = GetMD5Hash(request);

    // check if another thread is processing/has processed an identical request
    RequestToken token;
    if (RequestTokens.TryGetValue(md5, out token))
    {
        // if the token exists already then wait till we can acquire the lock,
        // which indicates the processing has finished and a response is ready
        // for us to reuse
        lock (token.Sync)
        {
            return token.Response;
        }
    }
    else
    {
        token = new RequestToken(md5);
        lock (token.Sync)
        {
            RequestTokens.Add(md5, token);

            // do processing here..
            var response = ....

            token.Response = response;
            return response;
        }
    }
}
private class RequestToken
{
    private readonly object _sync = new object();

    public RequestToken(string md5)
    {
        MD5 = md5;
    }

    public string MD5 { get; private set; }
    public object Sync { get { return _sync; } }
    public MyResponse Response { get; set; }
}
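As noted above, the dictionary access itself still needs synchronization. A sketch of the same idea using ConcurrentDictionary with Lazy<T> (an alternative, not part of the original answer; ProcessRequest is a placeholder for the actual work) collapses the check-and-add into one atomic step:
private readonly ConcurrentDictionary<string, Lazy<MyResponse>> _requests =
    new ConcurrentDictionary<string, Lazy<MyResponse>>();

public MyResponse MyMethod(MyRequest request)
{
    var md5 = GetMD5Hash(request);

    // All callers with the same hash share one Lazy; only the first caller
    // runs the factory, the rest block on .Value until it completes.
    var lazy = _requests.GetOrAdd(md5,
        _ => new Lazy<MyResponse>(() => ProcessRequest(request),
                                  LazyThreadSafetyMode.ExecutionAndPublication));
    return lazy.Value;
}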
To me, this is something I'd want to abstract away from my business logic, so I'd personally use PostSharp and write a little attribute to handle all this.
I've written a Memoizer attribute which does something similar, caching responses based on the request but without the synchronization steps, so you could take a look at what I've done and modify it accordingly to achieve what you're after.

At first, I am thinking of a custom hashtable implementation that keeps the MD5s currently being processed, and looks up the hash of each incoming request before starting to process it.

Related

How to gracefully await while object not changed (no working code required!)

I would like to gather your advice on the following problem:
Task
There are two microservices running, A and B. At some point microservice_A will create a request message (or an array RequestMessage[]) like the example "RequestMessage" below and send it to microservice_B.
public class RequestMessage
{
    public Guid guid;
    public string result;
    public DateTime expirationDateTime;

    public RequestMessage()
    {
        guid = Guid.NewGuid();
        result = "no_result";
    }
}
Responses are delivered through a service class that implements a method which will be called at some time after the request was sent. The call to ResolveRequestedMessage() happens under the hood; only its implementation is left to the developer.
public class RequestMessageResolver : IRequestMessageResolver
{
    public bool ResolveRequestedMessage(Guid id, string result)
    {
        // Find the request, e.g. in a RequestMessage[], by id. The implementation
        // is left to the developer and can vary.
        RequestCollection.Get(id).result = "resolved";
        return true;
    }
}
Problem
The RequestMessage should be awaitable; however, the interface of RequestMessage is defined and can't be changed.
Microservice_A should not proceed to any further action within the call scope (while still remaining available for other requests, e.g. status/error/etc.) until it gets the resolution of the requested message.
My idea and thoughts
First I tried to create a wrapper class that has a TaskCompletionSource which can be set from outside (Code Example 1). This works but requires a lot of extra wrappers to achieve the desired result.
Another idea is to modify the wrapper to implement INotifyCompletion instead of holding a TaskCompletionSource, but I'm not sure whether this will add significant overhead and make the solution complex for no reason; I have not tried it yet.
Code Example 1:
public class RequestMessageWrapper
{
    public TaskCompletionSource<bool> completionSource;
    public RequestMessage requestMessage;

    public RequestMessageWrapper(RequestMessage requestMessage)
    {
        completionSource = new TaskCompletionSource<bool>();
        this.requestMessage = requestMessage;
    }

    public async Task GetResponseAsync()
    {
        // also needs to be cancelled somehow if (DateTime.Now > requestMessage.expirationDateTime)
        await completionSource.Task;
    }
}
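One hedged way to handle the expiration noted in the comment above (a sketch against the fields of Code Example 1, not the original code) is to cancel the completion source from a timer:
public async Task GetResponseAsync()
{
    // Cancel the wait once the message expires; exact semantics depend on
    // how expirationDateTime is assigned.
    var remaining = requestMessage.expirationDateTime - DateTime.Now;
    if (remaining < TimeSpan.Zero)
        remaining = TimeSpan.Zero;

    using (var cts = new CancellationTokenSource(remaining))
    using (cts.Token.Register(() => completionSource.TrySetCanceled()))
    {
        // Throws TaskCanceledException if the message expires first.
        await completionSource.Task;
    }
}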

How to efficiently count HTTP Calls in asp.net core?

I have an abstract class called HttpHelper; it has basic methods like GET, POST, PATCH and PUT.
What I need to achieve is this:
Store the URL, time and date in the database each time GET, POST, PATCH or PUT is called.
I don't want to write directly to the database each time the functions are called (that would be slow), but to put the data somewhere (like a static queue or in-memory cache) which must be faster and non-blocking, and have a long-running background process that reads from this storage and writes the values to the database.
I have no clear idea how to do this, but the main purpose is to count the calls per hour or day, by domain, resource and URL query.
I'm thinking I could do one of the following:
Create a static class which uses ConcurrentQueue<T> to store data, and call that class in each function inside the HttpHelper class
Create a background task similar to this: Asp.Net core long running/background task
Or use Hangfire, but that might be too much for a simple task
Or is there a built-in way to do this in .NET Core?
Both Hangfire and background tasks would do the trick as consumers of the queue items.
Hangfire predates long-running background tasks (it's from before .NET Core), so go with long-running tasks for .NET Core implementations.
There is a "but" here, though.
How important is it to you not to miss a call? If it is important, then neither can help you.
The queue or whatever static construct you use will be lost the moment your application crashes, the machine restarts, or the application pool is recycled.
You would need to consider some kind of external queuing mechanism, like RabbitMQ with persistence on.
You could also append to a file, but that may also introduce delays from reads/writes.
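For illustration, the file-append fallback could be as simple as the following sketch (method and file names are placeholders):
private static readonly object _fileLock = new object();

public static void AppendCall(string url)
{
    // Durable across process recycles, at the cost of disk I/O per call.
    lock (_fileLock)
    {
        File.AppendAllText("calls.log", $"{DateTime.UtcNow:o}\t{url}\n");
    }
}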
I do not know how complex your problem is, but I would consider two solutions.
The first is calling an async insert method which does not block your main thread but starts a task. You can return the response without waiting for your log to be appended to the database. Since you want it implemented in only some methods, I would do it using attributes and middleware.
Simplified example:
public IActionResult SomePostMethod()
{
    LogActionAsync("This Is Post Method");
    return StatusCode(201);
}

public static Task LogActionAsync(string someParameter)
{
    return Task.Run(() =>
    {
        // Communicate with database (X ms)
    });
}
A better solution is to create a buffer which does not communicate with the database on every call, but only when filled or at an interval. It would look like this:
public IActionResult SomePostMethod()
{
    APILog.Log(new APILog.Item() { Date = DateTime.Now, Item1 = "Something" });
    return StatusCode(201);
}

public partial class APILog
{
    private static List<APILog.Item> _buffer = null;
    private const int _msTimeout = 60000; // Timeout between updates
    private static object _updateLock = new object();

    static APILog()
    {
        StartDBUpdateLoopAsync();
    }

    private static void StartDBUpdateLoopAsync()
    {
        // check if it has been started already, and other stuff
        Task.Run(() =>
        {
            while (true) // Do not use true, but some expression that tells you whether your application is still running.
            {
                Thread.Sleep(_msTimeout);
                lock (_updateLock)
                {
                    if (_buffer == null)
                        continue;
                    foreach (APILog.Item item in _buffer)
                    {
                        // Import into database here
                    }
                    _buffer.Clear();
                }
            }
        });
    }

    public static void Log(APILog.Item item)
    {
        lock (_updateLock)
        {
            if (_buffer == null)
                _buffer = new List<APILog.Item>();
            _buffer.Add(item);
        }
    }
}

public partial class APILog
{
    public class Item
    {
        public string Item1 { get; set; }
        public DateTime Date { get; set; }
    }
}
Also, in this second example I would not call APILog.Log() in each method by hand, but use middleware in combination with an attribute, as sketched below.
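A minimal middleware-only sketch of that idea (RequestCountingMiddleware is a hypothetical name; the attribute-based filtering is omitted):
using Microsoft.AspNetCore.Http;
using System;
using System.Threading.Tasks;

public class RequestCountingMiddleware
{
    private readonly RequestDelegate _next;

    public RequestCountingMiddleware(RequestDelegate next)
    {
        _next = next;
    }

    public async Task InvokeAsync(HttpContext context)
    {
        // Enqueue the log entry in the buffer; no database call on the request path.
        APILog.Log(new APILog.Item
        {
            Date = DateTime.Now,
            Item1 = $"{context.Request.Method} {context.Request.Path}"
        });

        await _next(context);
    }
}

// Registered in Startup.Configure:
// app.UseMiddleware<RequestCountingMiddleware>();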

Logging Hangfire jobs to Application Insights and correlating activity to an Operation Id

I feel like this should be a lot simpler than it's turning out to be, or I am just overthinking it.
I have a .NET Core 3.1 Web API application, which is using HangFire to process some jobs in the background. I have also configured Application Insights to log Telemetry from the .NET Core API.
I can see logging events and dependency telemetry data logged in Application Insights. However, each event/log/dependency is recorded against a unique OperationId and Parent Id.
I am trying to determine how to ensure that any activity which is logged, or any dependencies which are used in the context of the background job are logged against the OperationId and/or Parent Id of the original request which queued the background job.
When I queue a job, I can get the current OperationId of the incoming HTTP request, and I push that into the HangFire queue with the job. When the job is then performed, I can get that OperationId back. What I then need to do is make that OperationId available throughout the context/lifetime of the job execution, so that it is attached to any telemetry sent to Application Insights.
I thought I could create an IJobContext interface, which could be injected into the class which performs the job. Within that context I could push the OperationId. I could then create an ITelemetryInitializer which would also take the IJobContext as a dependency. In the ITelemetryInitializer I could then set the OperationId and ParentId of the telemetry being sent to Application Insights. Here's some simple code:
public class HangFirePanelMessageQueue : IMessageQueue
{
    private readonly MessageProcessor _messageProcessor;
    private readonly IIoTMessageSerializer _iotHubMessageSerialiser;
    private readonly IHangFireJobContext _jobContext;
    private readonly TelemetryClient _telemetryClient;

    public HangFirePanelMessageQueue(MessageProcessor panelMessageProcessor,
        IIoTMessageSerializer iotHubMessageSerialiser,
        IHangFireJobContext jobContext, TelemetryClient telemetryClient)
    {
        _messageProcessor = panelMessageProcessor;
        _iotHubMessageSerialiser = iotHubMessageSerialiser;
        _jobContext = jobContext;
        _telemetryClient = telemetryClient;
    }

    public async Task ProcessQueuedMessage(string message, string operationId)
    {
        var iotMessage = _iotHubMessageSerialiser.GetMessage(message);
        _jobContext?.Set(iotMessage.CorrelationID, iotMessage.MessageID);
        await _messageProcessor.ProcessMessage(iotMessage);
    }

    public Task QueueMessageForProcessing(string message)
    {
        var dummyTrace = new TraceTelemetry("Queuing message for processing", SeverityLevel.Information);
        _telemetryClient.TrackTrace(dummyTrace);

        string opId = dummyTrace.Context.Operation.Id;
        BackgroundJob.Enqueue(() => ProcessQueuedMessage(message, opId));
        return Task.CompletedTask;
    }
}
The IJobContext would look something like this:
public interface IHangFireJobContext
{
    bool Initialised { get; }
    string OperationId { get; }
    string JobId { get; }

    void Set(string operationId, string jobId);
}
And then I would have an ITelemetryInitializer which enriches any ITelemetry:
public class EnrichBackgroundJobTelemetry : ITelemetryInitializer
{
    private readonly IHangFireJobContext jobContext;

    public EnrichBackgroundJobTelemetry(IHangFireJobContext jobContext)
    {
        this.jobContext = jobContext;
    }

    public void Initialize(ITelemetry telemetry)
    {
        if (!jobContext.Initialised)
        {
            return;
        }

        telemetry.Context.Operation.Id = jobContext.OperationId;
    }
}
The problem I have, however, is that the ITelemetryInitializer is a singleton, so it would be instantiated once with an IHangFireJobContext which would then never update for any subsequent HangFire job.
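One conceivable workaround (a sketch, not from the original post) is to back the context with AsyncLocal<T>: an AsyncLocal value flows with each job's async execution context, so even a singleton initializer would read the value of the currently executing job:
public class HangFireJobContext : IHangFireJobContext
{
    // AsyncLocal state flows with the async execution context, so each
    // running job observes only its own values, even through a singleton.
    private static readonly AsyncLocal<(string OperationId, string JobId)?> _current =
        new AsyncLocal<(string OperationId, string JobId)?>();

    public bool Initialised => _current.Value.HasValue;
    public string OperationId => _current.Value?.OperationId;
    public string JobId => _current.Value?.JobId;

    public void Set(string operationId, string jobId)
        => _current.Value = (operationId, jobId);
}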
I did find the https://github.com/skwasjer/Hangfire.Correlate project, which extends https://github.com/skwasjer/Correlate. Correlate creates a correlation context which can be accessed via an ICorrelationContextAccessor, similar to the IHttpContextAccessor.
However, the footnotes for Correlate state "Please consider that .NET Core 3 now has built-in support for W3C TraceContext (blog) and that there are other distributed tracing libraries with more functionality than Correlate." which lists Application Insights as one of the alternatives for more advanced distributed tracing.
So can anyone help me understand how I can enrich any telemetry going to Application Insights when it is created within the context of a HangFire job? I feel the correct answer is to use an ITelemetryInitializer and populate the OperationId on that ITelemetry item; however, I am not sure what dependency to inject into the ITelemetryInitializer in order to get access to the HangFire job context.
When I queue a job, I can get the current OperationId of the incoming HTTP request, and I push that into the HangFire queue with the job.
So, am I correct to say that you have a controller action that pushes work to Hangfire? If so, what you can do is get the operation id inside the controller method and pass it to the job. Then use that operation id to start a new operation. That operation, together with all the telemetry generated during it, will be linked to the original request.
I have no Hangfire integration, but the code below shows the general idea: some work is queued to be done in the background and should be linked to the request as far as the telemetry is concerned:
[HttpGet("/api/demo5")]
public ActionResult TrackWorker()
{
var requestTelemetry = HttpContext.Features.Get<RequestTelemetry>();
_taskQueue.QueueBackgroundWorkItem(async ct =>
{
using(var op = _telemetryClient.StartOperation<DependencyTelemetry>("QueuedWork", requestTelemetry.Context.Operation.Id))
{
_ = await new HttpClient().GetStringAsync("http://blank.org");
await Task.Delay(250);
op.Telemetry.ResultCode = "200";
op.Telemetry.Success = true;
}
});
return Accepted();
}
The full example can be found here.
Working from Peter Bons' example I did it like this:
Code originally triggered from a controller action:
// Get the current Application Insights Id. Could use .RootId if
// you only want the OperationId, but I want the ParentId too.
var activityId = System.Diagnostics.Activity.Current?.Id;

_backgroundJobClient.Enqueue<JobDefinition>(x =>
    x.MyMethod(queueName, otherMethodParams, activityId));
In my JobDefinition class:
// I use different queues, but you don't need to.
// otherMethodParams is just an example. Have as many as you need, like normal.
[AutomaticRetry(OnAttemptsExceeded = AttemptsExceededAction.Delete, Attempts = 10)]
[QueueNameFromFirstParameter]
public async Task MyMethod(string queueName, string otherMethodParams,
    string activityId)
{
    var (operationId, parentId) = SplitCorrelationIdIntoOperationIdAndParentId(
        activityId);

    // Starting this new operation will initialise
    // System.Diagnostics.Activity.Current.
    using (var operation = _telemetryClient.StartOperation<DependencyTelemetry>(
        "JobDefinition.MyMethod", operationId, parentId))
    {
        try
        {
            operation.Telemetry.Data = $"something useful here";

            // If you have other state you'd like in App Insights logs,
            // call AddBaggage and they show up as a customDimension,
            // e.g. in any trace logs.
            System.Diagnostics.Activity.Current.AddBaggage("QueueName", queueName);

            // ... do the real background work here...

            operation.Telemetry.Success = true;
        }
        catch (Exception)
        {
            operation.Telemetry.Success = false;
            throw;
        }
    }
}

// Splits the full value of System.Diagnostics.Activity.Current.Id,
// like "00-12994526f1cb134bbddd0f256e8bc3f0-872b3bd78c345a46-00",
// into values ( "12994526f1cb134bbddd0f256e8bc3f0", "872b3bd78c345a46" ).
private static (string, string) SplitCorrelationIdIntoOperationIdAndParentId(string activityId)
{
    if (string.IsNullOrEmpty(activityId))
        return (null, null);

    var splits = activityId.Split('-');

    // This is what should happen
    if (splits.Length >= 3)
        return (splits[1], splits[2]);

    // Must be in a weird format. Try to return something useful.
    if (splits.Length == 2)
        return (splits[0], splits[1]);

    return (activityId, null);
}
I'm not sure using the OperationId and ParentId is quite right here. It does tie the background job to the originating request's OperationId, but if the originating request has a ParentId, then this background job should really have its ParentId set to the request itself, not to the request's ParentId. Anyone know?
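Incidentally, with the W3C ID format the string splitting above can be avoided, since the activity exposes both parts as typed properties; a sketch, assuming the W3C format is in use:
var activity = System.Diagnostics.Activity.Current;

// TraceId is the operation id; SpanId identifies the current span,
// which becomes the parent of the job's operation.
string operationId = activity?.TraceId.ToHexString();
string parentId = activity?.SpanId.ToHexString();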

.Net Core Async critical section if working on same entity

I need to be sure that a method accessed via a web API cannot be entered by multiple calls at the same time if they work on the same object with the same id.
I understand the use of SemaphoreSlim, but a simple implementation of that will lock the critical section for everyone. I need that section locked only when calls work on the same entity, not on two different ones.
This is my scenario: a user starts to work, and the entity is created and ready to be modified. Then one or more users can manipulate this entity, but part of this manipulation has to be in a critical section or it will lead to inconsistent data. When the work is finished, the entity is removed from the work status, moved to an archive, and can only be accessed read-only.
The class which contains that function is injected as transient in the startup of the application
services.AddTransient<IWorkerService>(f => new WorkerService(connectionString));
public async Task<int> DoStuff(int entityId)
{
    // Not critical stuff

    // Critical stuff
    ReadObjectFromRedis();
    ManipulateObject();
    UpdateSqlDatabase();
    SaveObjectToRedis();

    // Not critical stuff
}
How can I achieve that?
Try this; I'm not sure if those objects are available in .NET Core:
class Controller
{
    private static ConcurrentDictionary<int, SemaphoreSlim> semaphores =
        new ConcurrentDictionary<int, SemaphoreSlim>();

    public async Task<int> DoStuff(int entityId)
    {
        // Initial count must be 1, or the first caller would wait forever.
        SemaphoreSlim sem = semaphores.GetOrAdd(entityId, ent => new SemaphoreSlim(1, 1));
        await sem.WaitAsync();
        try
        {
            // do real stuff
        }
        finally
        {
            sem.Release();
        }
    }
}
This is not an easy problem to solve. I have a similar problem with a cache: when the cache expires, I want only one call to be made to repopulate it. A very common scenario is a token, for example, that you have to renew every now and then.
A problem with the ordinary use of a semaphore is that after you exit, all the threads that were waiting will just go in and make the call again; that's why you need double-checked locking to fix it. Whether you can have some local state in your case I'm not sure (I suppose you do, since you have a reason for making only one call and most likely have state), but here is how I solved it for a token cache:
private readonly SemaphoreSlim _semaphore = new SemaphoreSlim(1);

public async Task<string> GetOrCreateAsync(Func<Task<TokenResponse>> getToken)
{
    string token = Get();
    if (token == null)
    {
        await _semaphore.WaitAsync();
        try
        {
            token = Get();
            if (token == null)
            {
                var data = await getToken();
                Set(data);
                token = data.AccessToken;
            }
        }
        finally
        {
            _semaphore.Release();
        }
    }
    return token;
}
Now, I don't really know whether it is bulletproof. If it were ordinary double-checked locking (not async), it would not be, though the explanation of why is really hard and comes down to how processors handle multithreading behind the scenes and how they reorder instructions.
But in the cache case, a double call once in a blue moon is not that big of a problem.
I have not found a better way to do this, and it is an example provided by Scott Hanselman, among others, and can be found in a few places on Stack Overflow as well.
Use of a semaphore is overkill for this. A named mutex will suffice.
class Foo
{
    public void Bar(int id)
    {
        // Named mutexes are visible machine-wide, so the name should be
        // specific enough not to collide with other applications.
        using var mutex = new Mutex(false, id.ToString());
        mutex.WaitOne();
        try
        {
            // Business logic here.
        }
        finally
        {
            mutex.ReleaseMutex();
        }
    }
}

ServiceStack Performance

Let me start by saying I love the design of ServiceStack as a client. (I've never used it for server side)
I'm writing a C# wrapper for API calls and I keep getting timeout and authentication errors. I've contacted the developers at the other end and they assure me that there are no issues on their end and that I must be doing something wrong. Normally I wouldn't believe them and I'd build a sample project to demonstrate the issue but in this case they pointed me to a web page that will test the same API I'm running in C# and they can re-authenticate as fast as they can click the submit button. I forget the exact site they use for testing but enough of my story... I'm sure I'm doing something wrong I just don't know what.
Here's my unit test. If I run it by itself or with one copy it works fine (150-1100 ms), but if I make 3 or more copies of it then only 2-3 will pass and the rest will time out.
[TestMethod]
[Timeout(5000)]
public void Login_Success1()
{
    var client = new JsonServiceClient("apiurl");

    var response = client.Login("XXXAccessKeyXXX", "XXXSecretKeyXXX");

    // Assertions
}
This is my extension method:
public static class Extensions
{
    public static (bool Success, string Message, string Token) Login(this JsonServiceClient client, string accessKey, string secretKey)
    {
        try
        {
            var response = client.Post(new LoginRequest(accessKey, secretKey));
            var authorization = response.Headers.GetValues("Authorization")[0];
            return (true, string.Empty, authorization);
        }
        catch (Exception ex)
        {
            return (false, $"Authentication failed: {ex.Message}", string.Empty);
        }
    }
}
And here's the login request:
[Route("/sessions")]
[DataContract]
internal class LoginRequest
{
internal LoginRequest(string accessKey, string secretKey)
{
AccessKey = accessKey ?? throw new ArgumentNullException(nameof(accessKey));
SecretKey = secretKey ?? throw new ArgumentNullException(nameof(secretKey));
}
[DataMember(Name = "accessKey")]
internal string AccessKey { get; set; }
[DataMember(Name = "secretKey")]
internal string SecretKey { get; set; }
}
I think this is all the relevant code, but if you feel I missed something please let me know.
Your Request DTOs should implement either IReturn<T> or IReturnVoid; otherwise, if you're sending just an object, you will call the deprecated Post() method:
/// <summary>
/// APIs returning HttpWebResponse must be explicitly Disposed, e.g using (var res = client.Post(url)) { ... }
/// </summary>
[Obsolete("Use: using (client.Post<HttpWebResponse>(requestDto) { }")]
public virtual HttpWebResponse Post(object requestDto)
{
    return Send<HttpWebResponse>(HttpMethods.Post, ResolveTypedUrl(HttpMethods.Post, requestDto), requestDto);
}
Because ServiceStack doesn't know how you want the response deserialized, it returns the open HttpWebResponse so you can inspect the response yourself (as you're doing in your example). But this needs to be explicitly disposed, as .NET's HttpWebRequest only allows a couple of concurrent requests open per domain, which will cause your app to hang/timeout while it waits for requests to be disposed to stay within the concurrency limit.
The preferred solution is to always annotate Request DTOs that you send with ServiceStack clients with an IReturn or IReturn<T> interface marker. If it has none, or you want to ignore the response, implement IReturnVoid; otherwise implement IReturn<ResponseDtoType>:
class LoginRequest : IReturnVoid {}
Which instead calls the non-deprecated Post() method which disposes of the HttpWebResponse.
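For completeness, a typed-response variant might look like the following sketch (LoginResponse is a hypothetical DTO; this particular API returns the token in a header, which is why IReturnVoid fits here):
// Hypothetical sketch: if the API returned a JSON body, a typed marker
// would let the client deserialize the response directly.
[Route("/sessions")]
public class LoginRequest : IReturn<LoginResponse>
{
    public string AccessKey { get; set; }
    public string SecretKey { get; set; }
}

public class LoginResponse
{
    public string Token { get; set; }
}

// Usage: var response = client.Post(new LoginRequest { ... });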
Otherwise, if you want to send plain object DTOs, you need to dispose of the HttpWebResponse after usage, e.g.:
using (var response = client.Post<HttpWebResponse>(new LoginRequest(accessKey, secretKey)))
{
    var authorization = response.Headers.GetValues("Authorization")[0];
}
APIs which implicitly return HttpWebResponse were deprecated to avoid hard-to-identify issues like this; instead we recommend using the explicit API above, which declares the HttpWebResponse return type at the call site, so it's easier to identify that it needs to be disposed.
Also note that the ServiceStack service clients are opinionated towards calling ServiceStack services; for calling other services we recommend using HTTP Utils instead.
