Recently, I successfully created a long-polling service using HttpAsyncHandlers. During development it occurred to me that I might be able to re-use the AsyncResult object many times without long-polling repeatedly. If that were possible, I could "simulate" push technology by re-building or re-using the AsyncResult somehow (treating the first request as though it were a subscription request).
Of course, the first call works great, but subsequent calls keep giving me "Object reference not set to an instance of an object". I am guessing it is because certain objects are static and therefore, once completed, cannot be reused or retrieved (any insight there would be awesome!).
So the question is…
Is it possible to dynamically build a new callback from the old callback?
The initial "subscription" process goes like this:
public IAsyncResult BeginProcessRequest(HttpContext context, AsyncCallback cb, object extraData)
{
    Guid id = new Guid(context.Request["Key"]);
    AsyncResult request = new AsyncResult(cb, context, id);
    Service.Singleton.Subscribe(request);
    return request;
}
Here is an example of what the service does:
private void MainLoop()
{
    while (true)
    {
        if (_subscribers.Count == 0)
        {
            if (_messages.Count == max)
                _messages.Clear();
        }
        else
        {
            if (_messages.Count > 0)
            {
                Message message = _messages.Dequeue();
                foreach (AsyncResult request in _subscribers.ToArray())
                {
                    if (request.ProcessRequest(message))
                        _subscribers.Remove(request);
                }
            }
        }
        Thread.Sleep(500);
    }
}
Here is an example of what the AsyncResult.ProcessRequest() call does:
public bool ProcessRequest(Message message)
{
    try
    {
        this.Response = DoSomethingUseful(message);
        this.Response.SessionValid = true;
    }
    catch (Exception ex)
    {
        this.Response = new Response();
        this.Response.SessionValid = false;
    }
    this.IsCompleted = true;
    _asyncCallback(this);
    return this.IsCompleted;
}
So... would something like this be possible?
I literally tried the following and it didn't work, but is something like it possible?
AsyncResult newRequest = new AsyncResult(request.cb, request.context, request.id);
if (request.ProcessRequest(message))
{
    _subscribers.Remove(request);
    _subscribers.Add(newRequest);
}
IAsyncResult implementations must satisfy certain invariants, one of which is that they can only be completed once. You don't identify the AsyncResult you're using, but if it's Richter's famous version, then it would uphold that invariant.
If you don't want to go through the trouble of implementing the event-based asynchronous pattern, then the best option is Microsoft Rx, which is a true push-based system.
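For what it's worth, push over Rx can be as simple as a subject that many clients subscribe to. Here is a minimal sketch against the System.Reactive package; the Message type and the service shape are assumptions for illustration, not a drop-in replacement for the handler above:
using System;
using System.Reactive.Subjects; // System.Reactive NuGet package

public class Message { public string Body; } // stand-in for the question's Message type

// Hypothetical push-style service: clients subscribe once and then receive every
// published message, instead of completing one IAsyncResult per request.
public class PushService
{
    private readonly Subject<Message> _messages = new Subject<Message>();

    public IDisposable Subscribe(Action<Message> onMessage) => _messages.Subscribe(onMessage);

    public void Publish(Message message) => _messages.OnNext(message);
}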
Let me first preface by saying I am completely unfamiliar with IHttpAsyncHandler interface and usage.
That being said, in general when using an asynchronous programming model, each AsyncResult represents a specific asynchronous method call and should not be reused. It seems like you are looking for a RegisterEvent(callback) method more than a BeginProcessing(callback) method, so even if you were able to get this to work, the design does not follow asynchronous programming best practices (IMHO).
Since you are using HTTP, which is request/response based, it seems unlikely that you will be able to push multiple responses for one request, and even if you somehow hacked this up, your client would eventually get a timeout on its unanswered request, which would be problematic for what you are going for.
I know in Remoting you can register for remote events and WCF supports duplex contracts which can enable "push technology" if this is an option for you.
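As a rough illustration of the duplex idea, a WCF callback contract lets the server push to the client over the same connection. All names below are illustrative, not taken from your code:
using System.ServiceModel;

// The interface the *client* implements; the server calls back into it.
public interface IMessageCallback
{
    [OperationContract(IsOneWay = true)]
    void OnMessage(string message);
}

[ServiceContract(CallbackContract = typeof(IMessageCallback))]
public interface ISubscriptionService
{
    [OperationContract]
    void Subscribe(string key);
}

// On the server, the service grabs the caller's callback channel and can push later:
// var callback = OperationContext.Current.GetCallbackChannel<IMessageCallback>();
// callback.OnMessage("hello");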
Good luck.
We are using the following method in a stateful service on Service Fabric. The service has partitions. Sometimes we get a FabricNotReadableException from this piece of code.
public async Task HandleEvent(EventHandlerMessage message)
{
    var queue = await StateManager.GetOrAddAsync<IReliableQueue<EventHandlerMessage>>(EventHandlerServiceConstants.EventHandlerQueueName);
    using (ITransaction tx = StateManager.CreateTransaction())
    {
        await queue.EnqueueAsync(tx, message);
        await tx.CommitAsync();
    }
}
Does that mean that the partition is down and is being moved? Or that we hit a secondary replica? Because there is also a FabricNotPrimaryException that is raised in some cases.
I have seen the MSDN link (https://msdn.microsoft.com/en-us/library/azure/system.fabric.fabricnotreadableexception.aspx). But what does
Represents an exception that is thrown when a partition cannot accept reads.
mean? What has happened when a partition cannot accept reads?
Under the covers Service Fabric has several states that can impact whether a given replica can safely serve reads and writes. They are:
Granted (you can think of this as normal operation)
Not Primary
No Write Quorum (again mainly impacting writes)
Reconfiguration Pending
FabricNotPrimaryException which you mention can be thrown whenever a write is attempted on a replica which is not currently the Primary, and maps to the NotPrimary state.
FabricNotReadableException maps to the other states (you don't really need to worry about or differentiate between them), and can happen in a variety of cases. One example is if the replica you are trying to perform the read on is a "Standby" replica (a replica which was down and has been recovered, but there are already enough active replicas in the replica set). Another example is if the replica is a Primary but is being closed (say, due to an upgrade or because it reported fault), or if it is currently undergoing a reconfiguration (say, for example, that another replica is being added). All of these conditions will result in the replica not being able to safely satisfy reads for a small amount of time, due to certain safety checks and atomic changes that Service Fabric needs to handle under the hood.
You can consider FabricNotReadableException retriable. If you see it, just try the call again and eventually it will resolve into either NotPrimary or Granted. If you get FabricNotPrimaryException, it should generally be thrown back to the client (or the client in some way notified) so that it can re-resolve in order to find the current Primary (the default communication stacks that Service Fabric ships take care of watching for non-retriable exceptions and re-resolving on your behalf).
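To make the retry part concrete, here is a rough sketch of how the question's HandleEvent could retry on FabricNotReadableException while letting FabricNotPrimaryException bubble up; the attempt count and delay are illustrative assumptions, not official guidance:
private async Task EnqueueWithRetryAsync(IReliableQueue<EventHandlerMessage> queue,
                                         EventHandlerMessage message,
                                         int maxAttempts = 5)
{
    for (int attempt = 1; ; attempt++)
    {
        try
        {
            using (ITransaction tx = StateManager.CreateTransaction())
            {
                await queue.EnqueueAsync(tx, message);
                await tx.CommitAsync();
            }
            return;
        }
        catch (FabricNotReadableException) when (attempt < maxAttempts)
        {
            // Transient in the vast majority of cases; back off briefly and retry.
            await Task.Delay(TimeSpan.FromMilliseconds(250));
        }
        // FabricNotPrimaryException is deliberately not caught here, so the caller
        // can re-resolve the current Primary.
    }
}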
There are two current known issues with FabricNotReadableException.
FabricNotReadableException should have two variants. The first should be explicitly retriable (FabricTransientNotReadableException) and the second should be FabricNotReadableException. The first version (Transient) is the most common and is probably what you are running into, certainly what you would run into in the majority of cases. The second (non-transient) would be returned in the case where you end up talking to a Standby replica. Talking to a standby won't happen with the out of the box transports and retry logic, but if you have your own it is possible to run into it.
The other issue is that today FabricNotReadableException does not derive from FabricTransientException, which would make it easier to determine what the correct behavior is.
Posted as an answer (to asnider's comment - Mar 16 at 17:42) because it was too long for comments! :)
I am also stuck in this catch-22. My service starts and immediately receives messages. I want to encapsulate the service startup in OpenAsync and set up some ReliableDictionary values, then start receiving messages. However, at this point the fabric is not readable and I need to split this "startup" between OpenAsync and RunAsync :(
RunAsync in my service and OpenAsync in my client also seem to have different cancellation tokens, so I need to work out how to deal with that too. It all feels a bit messy. I have a number of ideas on how to tidy this up in my code, but has anyone come up with an elegant solution?
It would be nice if ICommunicationClient had a RunAsync interface that was called when the Fabric becomes ready/readable and cancelled when the Fabric shuts down the replica - this would seriously simplify my life. :)
I was running into the same problem. My listener was starting up before the main thread of the service. I queued up the listeners that needed to be started, and then activated them all early in the main thread. As a result, all incoming messages could be handled and placed into the appropriate reliable storage. My simple solution (this is a service bus listener):
public Task<string> OpenAsync(CancellationToken cancellationToken)
{
    string uri;
    Start();
    uri = "<your endpoint here>";
    return Task.FromResult(uri);
}

public static object lockOperations = new object();
public static bool operationsStarted = false;
public static List<ClientAuthorizationBusCommunicationListener> pendingStarts = new List<ClientAuthorizationBusCommunicationListener>();

public static void StartOperations()
{
    lock (lockOperations)
    {
        if (!operationsStarted)
        {
            foreach (ClientAuthorizationBusCommunicationListener listener in pendingStarts)
            {
                listener.DoStart();
            }
            operationsStarted = true;
        }
    }
}

private static void QueueStart(ClientAuthorizationBusCommunicationListener listener)
{
    lock (lockOperations)
    {
        if (operationsStarted)
        {
            listener.DoStart();
        }
        else
        {
            pendingStarts.Add(listener);
        }
    }
}

private void Start()
{
    QueueStart(this);
}

private void DoStart()
{
    ServiceBus.WatchStatusChanges(HandleStatusMessage,
                                  this.clientId,
                                  out this.subscription);
}
========================
In the main thread, you call the function to start listener operations:
protected override async Task RunAsync(CancellationToken cancellationToken)
{
    ClientAuthorizationBusCommunicationListener.StartOperations();
...
This problem likely manifested itself here because the bus in question already had messages and started firing the second the listener was created. Trying to access anything in the state manager at that point was throwing the exception you were asking about.
I consume a WCF service asynchronously. If I can't connect to the service or an exception occurs, the client goes into the faulted state and writes the error to the Error property of the AsyncCompletedEventArgs.
What do I have to do with the service client? I cannot close it, because that would throw a CommunicationObjectFaultedException. What else do I have to do after logging the error?
Here's my code:
MyServiceClient serviceClient = new MyServiceClient();

//Close the connection with the Service or log an error
serviceClient.JustAMethodCompleted += (object sender, AsyncCompletedEventArgs args) =>
{
    if (args.Error != null)
    {
        //Log error
        ErrorHandler.Log(args.Error);
    }
    else
    {
        serviceClient.Close();
    }
};

//Call the service
serviceClient.JustAMethodAsync();
You can abort it and create a new one. Here's a fragment from a class I wrote that deals with that issue. Everything it touches here is legal to touch when the client is in the faulted state.
if (_client.InnerChannel.State == CommunicationState.Faulted)
{
    _client.Abort();
    _client = new TServiceClient();
}
TServiceClient is any subclass of System.ServiceModel.ClientBase<TIClientInterface>.
I wrote that because I've had constant access issues calling web services from the server end of an MVC4 web app, with the browser client accessing the page via RDS.
However, as of now, the above code isn't in use. For reasons I don't understand, it produced a lot more access-denied exceptions than the simplest approach of creating a new client for every call and disposing of it afterwards. I never bother checking the faulted state because I never use a client for more than one call anyway.
using (var cli = new Blah.Blah.FooWCFClient())
{
    _stuff = cli.GetStuff();
}
...in a try/catch, of course. If you see any issues with the client-caching/Abort approach, I'd suggest you try creating a new client for every call. Maybe it costs a few cycles, but it's silly to call a web service and then start worrying about runtime efficiency. That horse has left the barn.
I don't know how this would interact with the asynchronous business, other than a vague intuition about keeping things simple and not sharing anything across threads.
Welcome to my nightmare. I haven't yet identified the cause of our access issues, but I doubt things can possibly be that bad for you. So I hope at least one of those two options will work out.
UPDATE
Here's some .tt-generated service wrapper code from our XAML application. Every web service call method gets wrapped like this, and it's been bulletproof for years. I would recommend doing essentially this:
public static POCO.Thing GetThing(int thingID)
{
    var proxy = ServiceFactory.CreateNewFooWCFClientInstance();
    try
    {
        var returnValue = proxy.GetThing(thingID);
        proxy.Close();
        return returnValue;
    }
    catch (Exception ex)
    {
        // ***********************************
        // Error logging boilerplate redacted
        // ***********************************
        proxy.Abort();
        throw;
    }
}
I have a feeling that it's just as well if you don't reuse WCF client objects at all.
There is not much you can do with it. Create a new one and let the garbage collector collect the other one.
I'm attempting to write a C# wrapper around a third-party library written in native code for consumption in our apps, which are almost exclusively written in .NET, and I'm trying to remain faithful to the C# patterns. Almost all the calls in this library are asynchronous in nature, and it seems appropriate to wrap all of my async calls in Task<T> objects. Here's an oversimplified example of how the native library is structured:
delegate void MyCallback(string outputData);

class MyNativeLibrary
{
    public int RegisterCallback(MyCallback callback); // returns -1 on error
    public int RequestData(string inputData);         // returns -1 on error
}
Right now I surface the return values through event subscription; however, I believe this would be a far better way to return my data:
class WrapperAroundNativeCode
{
    public async Task<string> RequestData(string inputData);
}
So far I've been unsuccessful in finding an appropriate way to implement this, and I'm reaching out to folks with more experience working with Task<T> objects and the async/await pattern than I have.
You would use a TaskCompletionSource<TResult> for this. Something along the lines of the following code:
class WrapperAroundNativeCode
{
    public Task<string> RequestData(string inputData)
    {
        var completionSource = new TaskCompletionSource<string>();

        var result = Native.RegisterCallback(s => completionSource.SetResult(s));
        if (result == -1)
        {
            completionSource.SetException(new SomeException("Failed to set callback"));
            return completionSource.Task;
        }

        result = Native.RequestData(inputData);
        if (result == -1)
            completionSource.SetException(new SomeException("Failed to request data"));

        return completionSource.Task;
    }
}
This answer assumes that there won't be concurrent calls to this method. If there were, you would need some way to differentiate between the different calls. Many APIs provide a userData payload that you can set to a unique value per call, so that you can differentiate them.
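For illustration only: if the native API did let you pass such a correlation id through to the callback, the wrapper could key pending TaskCompletionSources off that id. Everything below is hypothetical, since the question's MyNativeLibrary does not expose a userData parameter:
using System;
using System.Collections.Concurrent;
using System.Threading;
using System.Threading.Tasks;

// Hypothetical native callback that echoes the caller-supplied id back.
delegate void MyCallbackWithId(int requestId, string outputData);

class WrapperWithCorrelation
{
    private readonly ConcurrentDictionary<int, TaskCompletionSource<string>> _pending =
        new ConcurrentDictionary<int, TaskCompletionSource<string>>();
    private int _nextId;

    // Registered once at startup: route each callback to the matching pending task.
    public void OnNativeCallback(int requestId, string outputData)
    {
        if (_pending.TryRemove(requestId, out var tcs))
            tcs.TrySetResult(outputData);
    }

    public Task<string> RequestDataAsync(string inputData)
    {
        int id = Interlocked.Increment(ref _nextId);
        var tcs = new TaskCompletionSource<string>();
        _pending[id] = tcs;
        // Hypothetical native call that carries the id through to the callback:
        // NativeLib.RequestData(id, inputData);
        return tcs.Task;
    }
}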
It sounds like you're looking for TaskCompletionSource<T>. You'd wrap your library by creating a TaskCompletionSource, creating an instance of MyNativeLibrary, registering a callback which sets the result of the task completion source, and then requesting data from the same instance. If either of these steps fails, set an error on the task completion source instead. Then just return the value of the TaskCompletionSource<>.Task property to the caller.
(This is assuming you can create separate instances of MyNativeLibrary - if you can only create a single instance across your whole app, it gets a lot harder.)
I am reading from a REST service and need to handle "Wait and retry" for a heavily used service that will give me an error:
Too many queries per second
or
Server Busy
Generally speaking, since I have many REST services to call, how can I generically handle the backoff logic that should kick in when one of these errors occurs?
Is there any framework that has this built in? I'm just looking to write clean code that doesn't worry too much about plumbing and infrastructure.
You can wrap the attempt in a method that handles the retry logic for you. For example, if you're using WebClient's async methods:
public async Task<T> RetryQuery<T>(Func<Task<T>> operation, int numberOfAttempts, int msecsBetweenRetries = 500)
{
    while (numberOfAttempts > 0)
    {
        try
        {
            T value = await operation();
            return value;
        }
        catch
        {
            // Failed case - retry
            --numberOfAttempts;
        }
        await Task.Delay(msecsBetweenRetries);
    }
    throw new ApplicationException("Operation failed repeatedly");
}
You could then use this via:
// Try 3 times with 500 ms wait times in between
string result = await RetryQuery(() => webClient.DownloadStringTaskAsync(url), 3);
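If you want an actual backoff rather than a fixed delay, the same wrapper idea can double the wait after each failure. A sketch along the same lines; the numbers are illustrative:
public async Task<T> RetryWithBackoff<T>(Func<Task<T>> operation, int numberOfAttempts, int initialDelayMsecs = 500)
{
    int delay = initialDelayMsecs;
    for (int attempt = 1; attempt <= numberOfAttempts; attempt++)
    {
        try
        {
            return await operation();
        }
        catch when (attempt < numberOfAttempts)
        {
            // Wait, then double the delay before the next attempt (exponential backoff).
            // The final failed attempt rethrows the original exception instead.
            await Task.Delay(delay);
            delay *= 2;
        }
    }
    throw new ApplicationException("Operation failed repeatedly");
}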
Try to determine how many requests can be active at a time and use a semaphore.
A semaphore is a way to handle resource locking where there are multiple identical resources, but only a limited number of them.
Here's the MSDN documentation on semaphores
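If you go that route, SemaphoreSlim is the easiest way to express it in async code. A minimal sketch; the concurrency limit, the URL-based API, and the class name are just illustrative:
using System.Net.Http;
using System.Threading;
using System.Threading.Tasks;

class ThrottledRestClient
{
    // Allow at most 4 requests in flight at once (illustrative number).
    private static readonly SemaphoreSlim _gate = new SemaphoreSlim(4);
    private static readonly HttpClient _http = new HttpClient();

    public static async Task<string> GetThrottledAsync(string url)
    {
        await _gate.WaitAsync();
        try
        {
            return await _http.GetStringAsync(url);
        }
        finally
        {
            _gate.Release();
        }
    }
}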
I recommend you look into the Transient Fault Handling Application Block, part of the Enterprise Library.
In the past, the Enterprise Library has IMO been over-engineered and not that useful, but they've taken steps to address that; the TFHAB is one of the newer blocks that follows better design guidelines (again, IMO).
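Roughly, you define a detection strategy that says which exceptions are transient and wrap your call in a RetryPolicy. This is a sketch from memory of the block's API; double-check the exact names against the EnterpriseLibrary.TransientFaultHandling package:
using System;
using Microsoft.Practices.EnterpriseLibrary.TransientFaultHandling; // namespace varies by package version

// Tell the block which exceptions count as transient (e.g. "server busy" responses).
class RestBusyDetectionStrategy : ITransientErrorDetectionStrategy
{
    public bool IsTransient(Exception ex) => ex is System.Net.WebException;
}

class Program
{
    static void Main()
    {
        // Retry up to 5 times with 2 seconds between attempts.
        var policy = new RetryPolicy<RestBusyDetectionStrategy>(new FixedInterval(5, TimeSpan.FromSeconds(2)));

        string body = policy.ExecuteAction(() => CallRestService());
        Console.WriteLine(body);
    }

    static string CallRestService() => "..."; // placeholder for the actual REST call
}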
I am fairly new to Rx and am having trouble finding a solution to my problem. I am using Rx to commence a download through a client library. Currently it looks like:
private void DownloadStuff(string descriptor, Action<Stuff> stuffAction)
{
    this.stuffDownloader.GetStuffObservable(descriptor).Subscribe(x => stuffAction(x));
}
Where stuffDownloader is a wrapper around the download logic defined in the client library. But I ran into a problem where I call DownloadStuff too often, triggering many downloads and overwhelming the system. Now what I would like to do is:
private void DownloadStuff(string descriptor, Action<Stuff> stuffAction)
{
    this.stuffDownloader.GetStuffObservable(descriptor)
        .SlowSubscribe(TimeSpan.FromMilliseconds(50))
        .Subscribe(x => stuffAction(x));
}
Where SlowSubscribe is some combination of Rx operators that subscribes only on some interval.
Normally I would just put these DownloadStuff calls on a queue and pull them off on an interval, but I've been trying to do more through Rx lately. Three solutions occur to me:
1. This functionality exists and can be done entirely on the subscription side.
2. This is possible, but the infrastructure of the downloader is incorrect and should change (i.e. stuffDownloader needs to behave differently).
3. This shouldn't be done with Rx; do it another way.
It occurs to me #2 is possible by passing an IObservable of descriptors to the client library and somehow slowing how the descriptors get onto the Observable.
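Roughly, something like the following sketch is what I mean by #2: descriptors get pushed through a subject, and Concat runs the downloads one at a time (DownloadAsync here is a hypothetical async wrapper around the client library, and Stuff stands in for the real type):
using System;
using System.Reactive.Linq;
using System.Reactive.Subjects;
using System.Threading.Tasks;

class Stuff { } // stand-in for the real type

class SerializedDownloader
{
    private readonly Subject<string> _descriptors = new Subject<string>();

    public SerializedDownloader(Func<string, Task<Stuff>> downloadAsync, Action<Stuff> stuffAction)
    {
        // Concat subscribes to one inner observable at a time, so downloads never overlap.
        _descriptors
            .Select(d => Observable.FromAsync(() => downloadAsync(d)))
            .Concat()
            .Subscribe(stuffAction);
    }

    public void Request(string descriptor) => _descriptors.OnNext(descriptor);
}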
You could, in theory, use Rx to treat your requests as events. This way you could leverage the serializing nature of Rx to queue up downloads.
I would think that your network layer (or the stuffDownloader) would do this for you, but if you want to join me for a hack... this is what I have come up with (Yeehaw!!)
1.
Don't pass an Action, use Rx! You are basically losing the error handling here and setting yourself up for weird unhandled exceptions.
private void DownloadStuff(string descriptor, Action<Stuff> stuffAction)
becomes
private IObservable<Stuff> DownloadStuff(string descriptor)
2.
Now we just have one method calling another. Seems pointless. Throw away the abstraction.
3.
Fix the underlying interface. To me, the stuffDownloader is not doing its job. Update the interface to take an IScheduler. Now you can pass it a dedicated EventLoopScheduler to enforce serialization of the work:
public IObservable<Stuff> GetStuffObservable(string descriptor, IScheduler scheduler)
4.
Fix the implementation?
As you want to serialize your requests (hmmmm....) you can just make the call synchronous.
private Stuff GetSync(string descriptor)
{
    var request = (HttpWebRequest)WebRequest.Create("http://se300328:90/");
    var response = request.GetResponse();
    var stuff = MapToStuff(response);
    return stuff;
}
Now you just call that in your other method:
public IObservable<Stuff> GetStuffObservable(string descriptor, IScheduler scheduler)
{
    return Observable.Create<Stuff>(o =>
    {
        try
        {
            var stuff = GetSync(descriptor);
            o.OnNext(stuff);
            o.OnCompleted();
        }
        catch (Exception ex)
        {
            o.OnError(ex);
        }
        return Disposable.Empty; // If you want to be sync, you can't cancel!
    })
    .SubscribeOn(scheduler);
}
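The caller could then hand in a dedicated EventLoopScheduler so every request is pumped through a single thread. A small usage sketch (creating and disposing the scheduler is left to you):
// One dedicated thread processes every download, in order.
var scheduler = new EventLoopScheduler(); // System.Reactive.Concurrency
stuffDownloader.GetStuffObservable("some-descriptor", scheduler)
               .Subscribe(stuff => Console.WriteLine("got stuff"));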
However, having done all of this, I am sure this is not what you really want. I would expect that there is a problem somewhere else in the system.
Another alternative is to consider using the Merge operator and its maximum-concurrency overload.
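For reference, Merge has an overload that takes a maximum-concurrency count, so something along these lines would cap how many downloads run at once. A sketch, where descriptors is an IObservable<string> of requests and DownloadAsync is a hypothetical async wrapper around the client library:
descriptors
    .Select(d => Observable.FromAsync(() => DownloadAsync(d)))
    .Merge(2)                // at most two downloads in flight at a time
    .Subscribe(stuffAction);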