Is there a way to retrieve an existing NSUrlSession/cancel its task? - c#

I am creating an NSUrlSession for a background upload using a unique identifier.
Is there a way, say after closing and reopening the app, to retrieve that NSUrlSession and cancel the upload task in case it has not been processed yet?
I tried simply recreating the NSUrlSession with the same identifier to check whether it still contains the upload task. However, it does not even let me create this session and throws an exception like "A background URLSession with identifier ... already exists", which is unsurprising as the documentation explicitly says that a session identifier must be unique.
I am trying to do this with Xamarin.Forms 2.3.4.270 in an iOS platform project.

Turns out I was on the right track. The message "A background URLSession with identifier ... already exists" seems to be more of a warning: no exception is actually thrown (the exception I had did not come from duplicate session creation).
So, you can in fact reattach to an existing NSUrlSession and will find the contained tasks still present, even after restarting the app. Just create a new configuration with the same identifier, use that to create a new session, ignore the warning that's printed out, and go on from there.
I am not sure if this is recommended for production use, but it works fine for my needs.
private async Task EnqueueUploadInternal(string uploadId)
{
    NSUrlSessionConfiguration configuration = NSUrlSessionConfiguration.CreateBackgroundSessionConfiguration(uploadId);
    INSUrlSessionDelegate urlSessionDelegate = (...);
    NSUrlSession session = NSUrlSession.FromConfiguration(configuration, urlSessionDelegate, new NSOperationQueue());
    NSUrlSessionUploadTask uploadTask = await (...);
    uploadTask.Resume();
}
private async Task CancelUploadInternal(string uploadId)
{
    NSUrlSessionConfiguration configuration = NSUrlSessionConfiguration.CreateBackgroundSessionConfiguration(uploadId);
    NSUrlSession session = NSUrlSession.FromConfiguration(configuration); // this will print a warning
    NSUrlSessionTask[] tasks = await session.GetAllTasksAsync();
    foreach (NSUrlSessionTask task in tasks)
        task.Cancel();
}
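If you instead use one shared background session for several uploads, a possible variation is to cancel only one of them; this is just a sketch under the assumption that each task was tagged with its uploadId via TaskDescription when it was enqueued:

// Sketch only: assumes uploadTask.TaskDescription = uploadId was set at enqueue time,
// and that sessionId is the shared background session identifier.
private async Task CancelSingleUploadInternal(string sessionId, string uploadId)
{
    NSUrlSessionConfiguration configuration = NSUrlSessionConfiguration.CreateBackgroundSessionConfiguration(sessionId);
    NSUrlSession session = NSUrlSession.FromConfiguration(configuration); // prints the "already exists" warning
    NSUrlSessionTask[] tasks = await session.GetAllTasksAsync();
    foreach (NSUrlSessionTask task in tasks)
    {
        if (task.TaskDescription == uploadId)
            task.Cancel();
    }
}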

Dialogs keep their variable values in new conversation in MS BotFramework v4

I'm using MS BotFramework v4. There is a RootDialog which starts Dialog_A or Dialog_B depending on what the user input is.
TL;DR
If a new conversation is started after a previous one and the bot isn't restarted, private variables of dialogs that have already been assigned a value (other than the initial one) are not reset to their initial value, leading to unexpected behavior. How can this be avoided?
Detailed
Let's assume the following scenario:
Each of these dialogs has some private variables to control whether to output a long or a short introduction message. The long one should only be output the first time the dialog is started. If the conversation reaches the dialog again, only the short message should be printed.
Implementation looks like this:
RootDialog.cs
public class RootDialog : ComponentDialog
{
    private bool isLongWelcomeText = true;
    // Some more private variables follow here

    public RootDialog() : base("rootId")
    {
        AddDialog(new WaterfallDialog(nameof(WaterfallDialog), new WaterfallStep[]
        {
            WelcomeStep,
            DoSomethingStep,
            FinalStep
        }));
    }

    private async Task<DialogTurnResult> WelcomeStep(WaterfallStepContext ctx, CancellationToken token)
    {
        if (isLongWelcomeText)
        {
            await ctx.Context.SendActivityAsync(MessageFactory.Text("A welcome message and some detailed bla bla about the bot"));
            isLongWelcomeText = false;
        }
        else
        {
            await ctx.Context.SendActivityAsync(MessageFactory.Text("A short message that the bot is waiting for input"));
        }
        return await ctx.NextAsync(null, token);
    }

    private async Task<DialogTurnResult> DoSomethingStep(WaterfallStepContext ctx, CancellationToken token)
    {
        // call Dialog_A or Dialog_B depending on the user's input
        // Dialog X starts
        return await ctx.BeginDialogAsync("Dialog_X", null, token);
    }

    private async Task<DialogTurnResult> FinalStep(WaterfallStepContext ctx, CancellationToken token)
    {
        // After dialog X has ended, RootDialog continues here and simply ends
        return await ctx.EndDialogAsync(null, token);
    }
}
Dialog_A and Dialog_B are structured the same way.
The problem
When the bot handles its first conversation, everything works as expected (the long welcome text is printed to the user and isLongWelcomeText is set to false in WelcomeStep). When I then start a new conversation (new conversationId and userId), isLongWelcomeText is still set to false, which leads to the bot outputting the short welcome text to a new user in a new conversation.
In BotFramework v3 dialogs were serialized and deserialized together with all variable values.
If I'm right, in BF v4 dialogs aren't serialized anymore.
The question
How can this be fixed? Is there a better way to do this?
Remarks
I'm using UserState and ConversationState, which are serialized and reset on new conversations. But I don't want to store every private variable of every dialog in those states; that cannot be the way to go.
Thanks in advance
Generally you should think of it as a mistake to put instance member variables in a dialog class. There may be some cases when it could work, but those cases will not involve trying to persist some kind of state between turns. There are three main problems with using any kind of in-memory variables of your bot classes to persist state between turns:
1. It won't be scoped correctly. This is the problem you noticed already. You've clearly defined your isLongWelcomeText as something that should be specific to a user and/or a conversation, but because it lives in your bot's own memory, which is used to process every conversation for every user, it can't distinguish between different conversations and users.
2. It won't scale correctly. This means that even if your bot is just talking to one user in one conversation, if the bot is deployed in some hosting service like Azure that scales out then multiple instances of your bot may be running. Different instances of your bot will have different memory, so if you want to design a bot correctly you need to act as though every turn will be processed by a totally different instance of the bot, maybe on a totally different server. One instance can't access the memory of another instance.
3. It will be lost when the app restarts. Even if you only have one user, one conversation, and one bot instance, you still want to be able to stop your bot and then start it again without ruining the conversation. If you're using the bot's memory, you can't do that.
The last two problems apply even when you're using MemoryStorage, not just when you're using in-memory variables. You may have guessed that the solution is to use bot state (and to connect the bot state to some storage class other than MemoryStorage when the bot is deployed).
You are correct that in v3 entire instance objects of dialog classes would get serialized into persistent state. That came with its own set of problems and didn't always make logical sense, so in v4 the things that get serialized are DialogInstance objects. (Read about dialog instances here.) Anything you want your dialog to keep track of, you should put in the associated dialog instance's state object, and the best place to see examples of how to do that is in the SDK source code itself. For example, you can see how a waterfall dialog keeps track of things like its custom values and what step it's on:
// Update persisted step index
var state = dc.ActiveDialog.State;
state[StepIndex] = index;
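To make that concrete, here is a minimal sketch (not the asker's actual code) of how the welcome flag could live in ConversationState instead of an instance field. The accessor name "HasSeenLongWelcome" and the way ConversationState is injected are assumptions for illustration:

public class RootDialog : ComponentDialog
{
    private readonly IStatePropertyAccessor<bool> hasSeenLongWelcomeAccessor;

    public RootDialog(ConversationState conversationState) : base("rootId")
    {
        // One flag per conversation instead of one flag per bot process.
        hasSeenLongWelcomeAccessor = conversationState.CreateProperty<bool>("HasSeenLongWelcome");

        AddDialog(new WaterfallDialog(nameof(WaterfallDialog), new WaterfallStep[]
        {
            WelcomeStep,
            DoSomethingStep,
            FinalStep
        }));
    }

    private async Task<DialogTurnResult> WelcomeStep(WaterfallStepContext ctx, CancellationToken token)
    {
        bool hasSeenLongWelcome = await hasSeenLongWelcomeAccessor.GetAsync(ctx.Context, () => false, token);
        if (!hasSeenLongWelcome)
        {
            await ctx.Context.SendActivityAsync(MessageFactory.Text("A welcome message and some detailed bla bla about the bot"), token);
            await hasSeenLongWelcomeAccessor.SetAsync(ctx.Context, true, token);
        }
        else
        {
            await ctx.Context.SendActivityAsync(MessageFactory.Text("A short message that the bot is waiting for input"), token);
        }
        return await ctx.NextAsync(null, token);
    }

    // DoSomethingStep and FinalStep stay as before.
}

Note that ConversationState still has to be saved at the end of the turn (for example via AutoSaveStateMiddleware or an explicit SaveChangesAsync call) and, as the answer above points out, backed by durable storage rather than MemoryStorage once the bot is deployed.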

Task.Run occasionally fails silently when launched from MVC Controller

I am attempting to generate PDF copies of specific forms within my MVC application. As this is time consuming, and the client does not need to wait for this generation to happen, I'm trying to trigger this as a series of Fire and Forget Tasks.
One hang-up of note is that I need to have the HttpContext established, or some underlying pieces of the code that I can't alter won't work. I believe I have dealt with this problem, but I wanted to call it out in case it matters.
Here is the function I am calling...
private void AsyncPDFFormGeneration(string htmlOutput, string serverRelativePath, string serverURL, string signature, ScannedDocument document, HttpContext httpContext)
{
    try
    {
        System.Web.HttpContext.Current = httpContext;
        using (StreamWriter stw = new StreamWriter(Server.MapPath(serverRelativePath), false, System.Text.Encoding.Default))
        {
            stw.Write(htmlOutput);
        }
        Doc ABCDoc = new Doc();
        ABCDoc.HtmlOptions.Engine = EngineType.Gecko;
        int DocID = 0;
        DocID = ABCDoc.AddImageUrl(serverURL + serverRelativePath + "?dumb=" + DateTime.Now.Hour.ToString() + DateTime.Now.Minute.ToString() + DateTime.Now.Second + DateTime.Now.Millisecond);
        while (true)
        {
            ABCDoc.FrameRect();
            if (!ABCDoc.Chainable(DocID))
                break;
            ABCDoc.TextStyle.LeftMargin = 100;
            ABCDoc.Page = ABCDoc.AddPage();
            DocID = ABCDoc.AddImageToChain(DocID);
        } //End while (true...
        for (int i = 1; i <= ABCDoc.PageCount; i++)
        {
            ABCDoc.PageNumber = i;
            ABCDoc.Flatten();
        }
        ScannedDocuments.AddScannedDocument(document, ABCDoc.GetData());
        System.IO.File.Delete(Server.MapPath(serverRelativePath));
    }
    catch (Exception e)
    {
        //Exception is logged to the database, and if that fails, to the Event Log
    }
}
Within, I am writing the String output of the HTML contents of the MVC Form in question to an html file, handing the path to that file to the PDF writer, generating the PDF, and then deleting the html file.
I'm calling it inside of a Controller POST method, like so:
Task.Run(() => AsyncPDFFormGeneration(htmlOutput, serverRelativePath,
serverURL, signature, document, HttpContext.ApplicationInstance.Context));
This command is called as part of a foreach loop that constructs the forms, loads them into string format, and then passes them into a task. I've also tried this with
Task.Factory.StartNew
just in case something weird was going on with Task.Run, but that didn't produce a different result.
The problem I am having is that not all of the Tasks execute every time. If I run in Visual Studio and step my way through debugging, it works properly every time. However, when attempting to generate 11 forms sequentially, sometimes it generates all of them, sometimes it generates 3 or 4, sometimes it generates all but 1.
I have error logging set up to be as extensive as possible, but no exceptions are being thrown that I can find, and no generated html files are left lying around in my file structure on account of a thread aborted part-way.
There seems to be a slight correlation between how quickly the page comes back from the post and how many of the forms are generated. A longer load time generally correlates with more of the forms being generated... but I was under the impression that this shouldn't matter. I'm spinning these off to separate threads with their own copy of the HttpContext to take with them and carry around. Once launched, I did not think that the original thread should impact them.
Any ideas on why I'm only getting 3 successful Tasks on some attempts, all 11 on another attempt, and no exceptions?
Task.Run(() => AsyncPDFFormGeneration(htmlOutput, serverRelativePath,
serverURL, signature, document, HttpContext.ApplicationInstance.Context));
You have a subtle race condition on this line. The problem is with the HttpContext.ApplicationInstance.Context property. It will be evaluated when the task starts. If it happens before the end of the request, this is fine. But if for some reason the task takes a bit of time to start, then the request will complete first, and the HttpContext will be null. Therefore, you will have a null-reference exception, giving you the impression that the task didn't start (when, in fact, it did but crashed immediately outside of your try/catch).
To avoid that, just store the context in a local variable, and use it for Task.Run:
var context = HttpContext; // Or HttpContext.ApplicationInstance.Context, but I don't really see the point
Task.Run(() => AsyncPDFFormGeneration(htmlOutput, serverRelativePath, serverURL, signature, document, context));
That said, I don't know what API you are using that requires System.Web.HttpContext.Current to be set, but it seems a very bad choice for a fire-and-forget task. Even if you locally save the HttpContext, it'll still have been cleaned up, so I'm not sure it'll behave as expected.
Also, as was mentioned in the comments, launching fire-and-forget tasks on ASP.NET is dangerous. You should use HostingEnvironment.QueueBackgroundWorkItem instead.
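For illustration, a minimal sketch of that approach, assuming .NET Framework 4.5.2 or later (the captured context variable and AsyncPDFFormGeneration come from the code above):

// Capture everything the background work needs before the request ends.
var context = HttpContext.ApplicationInstance.Context;

// HostingEnvironment.QueueBackgroundWorkItem (System.Web.Hosting) registers the work with
// ASP.NET so the app domain is not torn down while the work item is still running.
HostingEnvironment.QueueBackgroundWorkItem(cancellationToken =>
    AsyncPDFFormGeneration(htmlOutput, serverRelativePath, serverURL, signature, document, context));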
I would try using await Task.WhenAll(task1, task2, task3, etc) as your application may be closing before all tasks have completed.

What does the FabricNotReadableException mean? And how should we respond to it?

We are using the following method in a Stateful Service on Service Fabric. The service has partitions. Sometimes we get a FabricNotReadableException from this piece of code.
public async Task HandleEvent(EventHandlerMessage message)
{
    var queue = await StateManager.GetOrAddAsync<IReliableQueue<EventHandlerMessage>>(EventHandlerServiceConstants.EventHandlerQueueName);
    using (ITransaction tx = StateManager.CreateTransaction())
    {
        await queue.EnqueueAsync(tx, message);
        await tx.CommitAsync();
    }
}
Does that mean that the partition is down and is being moved? Or that we hit a secondary partition? Because there is also a FabricNotPrimaryException that is being raised in some cases.
I have seen the MSDN link (https://msdn.microsoft.com/en-us/library/azure/system.fabric.fabricnotreadableexception.aspx). But what does
Represents an exception that is thrown when a partition cannot accept reads.
mean? What has to happen for a partition to be unable to accept reads?
Under the covers Service Fabric has several states that can impact whether a given replica can safely serve reads and writes. They are:
- Granted (you can think of this as normal operation)
- Not Primary
- No Write Quorum (again mainly impacting writes)
- Reconfiguration Pending
FabricNotPrimaryException, which you mention, can be thrown whenever a write is attempted on a replica which is not currently the Primary, and maps to the Not Primary state.
FabricNotReadableException maps to the other states (you don't really need to worry or differentiate between them), and can happen in a variety of cases. One example is if the replica you are trying to perform the read on is a "Standby" replica (a replica which was down and which has been recovered, but there are already enough active replicas in the replica set). Another example is if the replica is a Primary but is being closed (say due to an upgrade or because it reported fault), or if it is currently undergoing a reconfiguration (say for example that another replica is being added). All of these conditions will result in the replica not being able to satisfy writes for a small amount of time due to certain safety checks and atomic changes that Service Fabric needs to handle under the hood.
You can consider FabricNotReadableException retriable. If you see it, just try the call again and eventually it will resolve into either Not Primary or Granted. If you get a FabricNotPrimaryException, this should generally be thrown back to the client (or the client should be notified in some way) so that it can re-resolve in order to find the current Primary (the default communication stacks that Service Fabric ships take care of watching for non-retriable exceptions and re-resolving on your behalf).
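As a rough illustration of that retry guidance, here is a minimal sketch (not from the original answer) that wraps the enqueue from the question and retries on FabricNotReadableException while letting FabricNotPrimaryException bubble up; the attempt count and delay are arbitrary:

public async Task HandleEventWithRetry(EventHandlerMessage message)
{
    const int maxAttempts = 5;
    for (int attempt = 1; ; attempt++)
    {
        try
        {
            var queue = await StateManager.GetOrAddAsync<IReliableQueue<EventHandlerMessage>>(EventHandlerServiceConstants.EventHandlerQueueName);
            using (ITransaction tx = StateManager.CreateTransaction())
            {
                await queue.EnqueueAsync(tx, message);
                await tx.CommitAsync();
            }
            return;
        }
        catch (FabricNotReadableException) when (attempt < maxAttempts)
        {
            // Transient: the replica can't serve the request right now (e.g. during a reconfiguration).
            await Task.Delay(TimeSpan.FromMilliseconds(200 * attempt));
        }
        // FabricNotPrimaryException is deliberately not caught: the caller should re-resolve the Primary.
    }
}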
There are two current known issues with FabricNotReadableException.
FabricNotReadableException should have two variants. The first should be explicitly retriable (FabricTransientNotReadableException) and the second should be FabricNotReadableException. The first version (Transient) is the most common and is probably what you are running into, certainly what you would run into in the majority of cases. The second (non-transient) would be returned in the case where you end up talking to a Standby replica. Talking to a standby won't happen with the out of the box transports and retry logic, but if you have your own it is possible to run into it.
The other issue is that FabricNotReadableException should derive from FabricTransientException (today it does not), which would make it easier to determine what the correct behavior is.
Posted as an answer (to asnider's comment - Mar 16 at 17:42) because it was too long for comments! :)
I am also stuck in this catch-22. My service starts and immediately receives messages. I want to encapsulate the service startup in OpenAsync and set up some ReliableDictionary values, then start receiving messages. However, at this point the Fabric is not readable and I need to split this "startup" between OpenAsync and RunAsync :(
RunAsync in my service and OpenAsync in my client also seem to have different cancellation tokens, so I need to work out how to deal with this too. It all feels a bit messy. I have a number of ideas on how to tidy this up in my code, but has anyone come up with an elegant solution?
It would be nice if ICommunicationClient had a RunAsync interface that was called when the Fabric becomes ready/readable and cancelled when the Fabric shuts down the replica - this would seriously simplify my life. :)
I was running into the same problem. My listener was starting up before the main thread of the service. I queued the list of listeners needing to be started, and then activated them all early on in the main thread. As a result, all messages coming in were able to be handled and placed into the appropriate reliable storage. My simple solution (this is a service bus listener):
public Task<string> OpenAsync (CancellationToken cancellationToken)
{
    string uri;
    Start ();
    uri = "<your endpoint here>";
    return Task.FromResult (uri);
}

public static object lockOperations = new object ();
public static bool operationsStarted = false;
public static List<ClientAuthorizationBusCommunicationListener> pendingStarts = new List<ClientAuthorizationBusCommunicationListener> ();

public static void StartOperations ()
{
    lock (lockOperations)
    {
        if (!operationsStarted)
        {
            foreach (ClientAuthorizationBusCommunicationListener listener in pendingStarts)
            {
                listener.DoStart ();
            }
            operationsStarted = true;
        }
    }
}

private static void QueueStart (ClientAuthorizationBusCommunicationListener listener)
{
    lock (lockOperations)
    {
        if (operationsStarted)
        {
            listener.DoStart ();
        }
        else
        {
            pendingStarts.Add (listener);
        }
    }
}

private void Start ()
{
    QueueStart (this);
}

private void DoStart ()
{
    ServiceBus.WatchStatusChanges (HandleStatusMessage,
        this.clientId,
        out this.subscription);
}
========================
In the main thread, you call the function to start listener operations:
protected override async Task RunAsync (CancellationToken cancellationToken)
{
    ClientAuthorizationBusCommunicationListener.StartOperations ();
    ...
This problem likely manifested itself here because the bus in question already had messages and started firing the second the listener was created. Trying to access anything in the state manager at that point was throwing the exception you were asking about.

Making file picker asynchronous - Windows Phone 8.1

I tried to make the file open picker asynchronous using TaskCompletionSource; however, sometimes my application closes with a -1 return value, and sometimes I get an exception like:
[System.Runtime.InteropServices.COMException] = {System.Runtime.InteropServices.COMException (0x80004005): Unspecified error
Unspecified error
at Windows.Storage.Pickers.FileOpenPicker.PickSingleFileAndContinue()
at PhotosGraphos.Mobile.Common.StorageFileExtensions.<PickSingleFileAsyncMobile..
Code:
public static class StorageFileExtensions
{
    private static TaskCompletionSource<StorageFile> PickFileTaskCompletionSource;
    private static bool isPickingFileInProgress;

    public static async Task<StorageFile> PickSingleFileAsyncMobile(this FileOpenPicker openPicker)
    {
        if (isPickingFileInProgress)
            return null;

        isPickingFileInProgress = true;
        PickFileTaskCompletionSource = new TaskCompletionSource<StorageFile>();
        var currentView = CoreApplication.GetCurrentView();
        currentView.Activated += OnActivated;
        openPicker.PickSingleFileAndContinue();

        StorageFile pickedFile;
        try
        {
            pickedFile = await PickFileTaskCompletionSource.Task;
        }
        catch (TaskCanceledException)
        {
            pickedFile = null;
        }
        finally
        {
            PickFileTaskCompletionSource = null;
            isPickingFileInProgress = false;
        }
        return pickedFile;
    }

    private static void OnActivated(CoreApplicationView sender, IActivatedEventArgs args)
    {
        var continuationArgs = args as FileOpenPickerContinuationEventArgs;
        sender.Activated -= OnActivated;
        if (continuationArgs != null && continuationArgs.Files.Any())
        {
            StorageFile pickedFile = continuationArgs.Files.First();
            PickFileTaskCompletionSource.SetResult(pickedFile);
        }
        else
        {
            PickFileTaskCompletionSource.SetCanceled();
        }
    }
}
What's weird is that this bug is hardly ever reproducible while debugging. Does anyone have any idea what the reason could be?
Don't do that (don't try to turn Continuation behaviour into async). Why?
Normally, when your app is put into the background (for example when you call the file picker), it is suspended, and here is one small pitfall: when you have a debugger attached, your app keeps running without being suspended. That can certainly cause some trouble.
Note also that when you run your app normally and fire a picker, in some cases your app can be terminated (low resources, the user closes it, ...). So you need two things here which are added by VS as a template: ContinuationManager and SuspensionManager. You will find more at MSDN. At the same link you will find a good procedure for debugging your app:
Follow these steps to test the case in which your app is terminated after calling the AndContinue method. These steps ensure that the debugger reattaches to your app after completing the operation and continuing.
1. In Visual Studio, right-click on your project and select Properties.
2. In Project Designer, on the Debug tab under Start action, enable Do not launch, but debug my code when it starts.
3. Run your app with debugging. This deploys the app, but does not run it.
4. Start your app manually. The debugger attaches to the app. If you have breakpoints in your code, the debugger stops at the breakpoints. When your app calls the AndContinue method, the debugger continues to run.
5. If your app calls a file picker, wait until you have opened the file provider (for example, Phone, Photos, or OneDrive). If your app calls an online identity provider, wait until the authentication page opens.
6. On the Debug Location toolbar, in the Process dropdown list, select the process for your app. In the Lifecycle Events dropdown list, select Suspend and Shutdown to terminate your app but leave the emulator running.
After the AndContinue operation completes, the debugger reattaches to your app automatically when the app continues.
I've changed the file picker to the standard way provided by @Romasz - it was still crashing. I've been debugging it for hours and I get the same COMException, but sometimes with this information provided:
"GetNavigationState doesn't support serialization of a parameter type which was passed to Frame.Navigate"
It seems that the code with TaskCompletionSource works and there is nothing wrong with it. I found this in the MSDN documentation for Frame:
Note: The serialization format used by these methods is for internal use only. Your app should not form any dependencies on it. Additionally, this format supports serialization only for basic types like string, char, numeric and GUID types.
And I was passing my model-class object as the navigation parameter - so it was kept in the navigation stack and therefore couldn't be serialized. The lesson is: do not use non-primitive types as the navigation parameter. Frame.Navigate should disallow such navigation and throw an exception, but it doesn't.
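For illustration, a minimal sketch of the change this implies (PhotoDetailsPage, selectedPhoto and PhotoRepository are made-up names): pass only a primitive identifier to Frame.Navigate and look the model up again on the target page.

// In the calling page: pass a primitive (here an int id) instead of the model object.
Frame.Navigate(typeof(PhotoDetailsPage), selectedPhoto.Id);

// In PhotoDetailsPage: resolve the model again from the id.
protected override void OnNavigatedTo(NavigationEventArgs e)
{
    base.OnNavigatedTo(e);
    int id = (int)e.Parameter;                      // a primitive parameter survives GetNavigationState
    this.DataContext = PhotoRepository.GetById(id); // hypothetical lookup in app code
}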
EDIT:
Another bug - if you bind a tapped event (say, a button's Tapped) or something similar to a command which launches the FileOpenPicker, you need to check whether picker.PickFile.. was already called - otherwise, when you tap that button quickly, you'll get several calls to picker.PickFile.. and an UnauthorizedAccessException will be thrown.

Azure Blob Lease and release

string uri = "myurl";
string blobstatus = GetBlobStatus(uri);

if (blobstatus != LeaseStatus.Locked.ToString())
{
    string response = AquireBlob(uri);
    //process data.
    string abc = ":em";
    ReleaseBlob(response, uri);
}
Above is my code for leasing and releasing locks on a blob. I'm looking at using this method for a multi-instance worker role where I want to run specific code after an interval of x time, as multiple instances could execute the code at the same time.
The problem is that I manage to get the LeaseId properly, but when the second instance checks the blob lease status, it is always Unspecified. Why is that so? Any clues?
I followed the following link to get a head start.
Leasing Windows Azure Blobs Using the Storage Client Library - blog.smarx.com
I think your approach should not rely on checking the blob status first and deciding, based on that, whether to acquire the lease or not. You should always try to acquire the lease and catch the exception thrown in that process. That way, if this code is running in a multi-instance environment, only one instance will be able to acquire the lease (and the other instances will just get an error).
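A minimal sketch of that pattern with the classic storage client library; the connection string, container/blob names and lease duration are placeholders, and the blob is assumed to already exist:

// Sketch only: always attempt the lease and treat a 409 Conflict as "another instance holds it".
CloudStorageAccount account = CloudStorageAccount.Parse(connectionString);
CloudBlockBlob blob = account.CreateCloudBlobClient()
    .GetContainerReference("locks")
    .GetBlockBlobReference("worker-lock");

try
{
    string leaseId = blob.AcquireLease(TimeSpan.FromSeconds(60), null);
    try
    {
        // Only the instance that acquired the lease gets here: do the periodic work.
    }
    finally
    {
        blob.ReleaseLease(AccessCondition.GenerateLeaseCondition(leaseId));
    }
}
catch (StorageException ex) when (ex.RequestInformation?.HttpStatusCode == 409)
{
    // Another instance currently holds the lease: skip this run.
}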
Good suggestions - I solved the problem. I found out that the LeaseStatus property is in fact not reliable and never returned results.
I had to get the status by making a web request directly, and then I could get the right result.
