Azure ML web service times out - c#

I have created a simple experiment in Azure ML and trigger it with an HTTP client. Everything works fine when the experiment is executed in the Azure ML workspace. However, when I trigger it using the HTTP client, the experiment times out and fails. Setting a timeout value on the HTTP client does not seem to help.
Is there any way we can set this timeout value so that the experiment does not fail?

Make sure you're setting the client timeout value correctly. If the server powering the web service times out, it sends back a response with HTTP status code 504 (Gateway Timeout), typically carrying the Azure ML error code BackendScoreTimeout. However, if you simply never receive a response at all, then your client isn't waiting long enough.
You can work out a reasonable timeout value by running your experiment in ML Studio: the experiment properties show how long it ran, so aim for roughly twice that duration as your client timeout.
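If you go the client-timeout route, here is a minimal sketch, assuming the same HttpClient / PostAsJsonAsync setup as the Azure ML sample code further down; serviceUrl, apiKey and scoreRequest are placeholders, not values from the question:

using (var client = new HttpClient())
{
    // HttpClient defaults to a 100-second timeout; raise it well above the
    // experiment's typical run time observed in ML Studio.
    client.Timeout = TimeSpan.FromMinutes(3);
    client.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Bearer", apiKey);

    var response = await client.PostAsJsonAsync(serviceUrl, scoreRequest);

    if ((int)response.StatusCode == 504)
    {
        // The service itself timed out (BackendScoreTimeout); a longer client
        // timeout won't help here - see the Batch Execution Service answer below.
    }
}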

I've had similar problems with an Azure ML experiment published as a web service. Most of the time it ran fine, but sometimes it returned a timeout error. The problem is that the experiment itself has a 90-second running time limit, so most probably your experiment sometimes runs over this limit and the call fails with a timeout. hth

Looks like it isn't possible to set this timeout, based on a feature request that is still marked as "planned" as of 4/1/2018.
The recommendation from the MSDN forums from 2017 is to use the Batch Execution Service, which starts the machine learning experiment and then asynchronously polls to see whether it has finished.
Here's a code snippet from the Azure ML Web Services Management Sample Code (all comments are from their sample code):
using (HttpClient client = new HttpClient())
{
    var request = new BatchExecutionRequest()
    {
        Outputs = new Dictionary<string, AzureBlobDataReference>()
        {
            {
                "output",
                new AzureBlobDataReference()
                {
                    ConnectionString = storageConnectionString,
                    // Replace this with the location you would like to use for your output file,
                    // and a valid file extension (usually .csv for scoring results, or .ilearner for trained models)
                    RelativeLocation = string.Format("{0}/outputresults.file_extension", StorageContainerName)
                }
            },
        },
        GlobalParameters = new Dictionary<string, string>()
        {
        }
    };

    client.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Bearer", apiKey);

    // WARNING: The 'await' statement below can result in a deadlock
    // if you are calling this code from the UI thread of an ASP.Net application.
    // One way to address this would be to call ConfigureAwait(false)
    // so that the execution does not attempt to resume on the original context.
    // For instance, replace code such as:
    //     result = await DoSomeTask()
    // with the following:
    //     result = await DoSomeTask().ConfigureAwait(false)

    Console.WriteLine("Submitting the job...");

    // submit the job
    var response = await client.PostAsJsonAsync(BaseUrl + "?api-version=2.0", request);
    if (!response.IsSuccessStatusCode)
    {
        await WriteFailedResponse(response);
        return;
    }

    string jobId = await response.Content.ReadAsAsync<string>();
    Console.WriteLine(string.Format("Job ID: {0}", jobId));

    // start the job
    Console.WriteLine("Starting the job...");
    response = await client.PostAsync(BaseUrl + "/" + jobId + "/start?api-version=2.0", null);
    if (!response.IsSuccessStatusCode)
    {
        await WriteFailedResponse(response);
        return;
    }

    string jobLocation = BaseUrl + "/" + jobId + "?api-version=2.0";
    Stopwatch watch = Stopwatch.StartNew();
    bool done = false;
    while (!done)
    {
        Console.WriteLine("Checking the job status...");
        response = await client.GetAsync(jobLocation);
        if (!response.IsSuccessStatusCode)
        {
            await WriteFailedResponse(response);
            return;
        }

        BatchScoreStatus status = await response.Content.ReadAsAsync<BatchScoreStatus>();
        if (watch.ElapsedMilliseconds > TimeOutInMilliseconds)
        {
            done = true;
            Console.WriteLine(string.Format("Timed out. Deleting job {0} ...", jobId));
            await client.DeleteAsync(jobLocation);
        }

        switch (status.StatusCode)
        {
            case BatchScoreStatusCode.NotStarted:
                Console.WriteLine(string.Format("Job {0} not yet started...", jobId));
                break;
            case BatchScoreStatusCode.Running:
                Console.WriteLine(string.Format("Job {0} running...", jobId));
                break;
            case BatchScoreStatusCode.Failed:
                Console.WriteLine(string.Format("Job {0} failed!", jobId));
                Console.WriteLine(string.Format("Error details: {0}", status.Details));
                done = true;
                break;
            case BatchScoreStatusCode.Cancelled:
                Console.WriteLine(string.Format("Job {0} cancelled!", jobId));
                done = true;
                break;
            case BatchScoreStatusCode.Finished:
                done = true;
                Console.WriteLine(string.Format("Job {0} finished!", jobId));
                ProcessResults(status);
                break;
        }

        if (!done)
        {
            Thread.Sleep(1000); // Wait one second
        }
    }
}

Related

How to start and stop schedule webjob using asp .net c#

I have created a triggered WebJob that calls a web application, but I now have a requirement that the customer should be able to start or stop the triggered job whenever they want, and also schedule it.
I have done the scheduling part using the WebJobs REST API, but I am not able to implement start and stop, since a triggered job does not offer any such option.
Is there a way to start and stop a triggered WebJob? I have tried using the Kudu kill-process API, but once the triggered WebJob completes it no longer appears in the process explorer.
var base64Auth = Convert.ToBase64String(Encoding.Default.GetBytes($"{azureUserName}:{azurePassword}"));
if (string.Equals(jobStatus, "STOP", StringComparison.InvariantCultureIgnoreCase))
{
    using (var client = new HttpClient())
    {
        client.DefaultRequestHeaders.Add("Authorization", "Basic " + base64Auth);
        //var baseUrl = new Uri($"https://{webAppName}.scm.azurewebsites.net/");
        var requestURl = string.Format("{0}/api/processes", azureWebAppUrl);
        var response = client.GetAsync(requestURl).Result;
        if (response.IsSuccessStatusCode == true)
        {
            string res = response.Content.ReadAsStringAsync().Result;
            var processes = JsonConvert.DeserializeObject<List<ProcessData>>(res);
            if (processes.Any(x => x.name.Contains(azureWebJobName)))
            {
                requestURl = string.Format("{0}/api/processes/{1}", azureWebAppUrl,
                    processes.Where(x => x.name.Contains(azureWebJobName)).FirstOrDefault().id);
                response = client.DeleteAsync(requestURl).Result;
            }
            returnStr = "success";
        }
        else
        {
            returnStr = response.Content.ToString();
        }
    }
}
Please help me understand a better way to stop and start the triggered WebJob process. I also found that app settings such as WEBJOBS_STOPPED and WEBJOBS_DISABLE_SCHEDULE can be added to the main application, but that would make the WebJob depend on the main application and require updating it every time; I want to rely entirely on the WebJob's own settings rather than the main application's settings.
I don't know if I've understood your problem correctly, but I think you can use Quartz.NET to start the job(s) and schedule them based on application settings.
https://www.quartz-scheduler.net/
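For what it's worth, a minimal Quartz.NET 3.x sketch of that idea; the job class, its name and the cron expression are made-up placeholders, not anything from the question:

using System.Threading.Tasks;
using Quartz;
using Quartz.Impl;

public class CallWebAppJob : IJob
{
    public Task Execute(IJobExecutionContext context)
    {
        // call your web application here
        return Task.CompletedTask;
    }
}

// somewhere at startup
IScheduler scheduler = await new StdSchedulerFactory().GetScheduler();
await scheduler.Start();

IJobDetail job = JobBuilder.Create<CallWebAppJob>()
    .WithIdentity("callWebAppJob")
    .Build();

ITrigger trigger = TriggerBuilder.Create()
    .WithCronSchedule("0 0/5 * * * ?")   // e.g. every 5 minutes, could be read from app settings
    .Build();

await scheduler.ScheduleJob(job, trigger);

// "stop" / "start" on demand without touching the main application's settings
await scheduler.PauseJob(new JobKey("callWebAppJob"));
await scheduler.ResumeJob(new JobKey("callWebAppJob"));

Pausing and resuming the job this way keeps the start/stop state inside the scheduler itself rather than in WEBJOBS_STOPPED-style app settings.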

Issue With HttpClient Bulk Parallel Request in .Net Core C#

So I have been struggling with this issue for about 3 weeks. Here's what I want to do.
I have around 2000 stock options. I want to fetch 5 of them at a time and process them, but it all has to run in parallel. I'll write it out in steps to make it clearer:
Get 5 stock symbols from an array.
Send them off to fetch their data and process it; don't wait for the response, keep going.
Wait 2.6 seconds (we are limited to 120 API requests per minute, so this delay throttles us to about 115 per minute).
Go to step 1.
All of the steps above have to run in parallel. I have written the code for it and it all seems to work fine, but randomly it crashes with:
"A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond."
Sometimes it never happens and everything works like a charm. The error is completely random: it could show up on the 57th stock or on the 1829th. I am using HttpClient. I have tested the same scenario from Angular, building the requests by hand, and it never crashes there, so it's not the third-party server's fault.
What I have already done:
Changed the HttpClient usage from a new instance per request to a single instance for the whole project.
Increased the ServicePointManager connection limit (the default for .NET Core is 2).
Used SemaphoreSlim for queuing and short-circuiting instead of relying on HttpClient's own queuing.
Forced ConnectionLeaseTimeout to 40 seconds to detect DNS changes, if any.
Changed async tasks to threading.
Tried almost everything from the internet.
My doubts:
I suspect it has something to do with the HttpClient class. I have read a lot of bad things about its misleading documentation, etc.
My friend's doubt:
He said it could be because of the concurrent tasks and that I should change them to threads.
Here's the code:
// Inside class constructor
private readonly HttpClient HttpClient = new HttpClient();
SetMaxConcurrency(ApiBaseUrl, maxConcurrentRequests);

// SetMaxConcurrency function
private void SetMaxConcurrency(string url, int maxConcurrentRequests)
{
    ServicePointManager.FindServicePoint(new Uri(url)).ConnectionLimit = maxConcurrentRequests;
    ServicePointManager.FindServicePoint(new Uri(url)).ConnectionLeaseTimeout = 40 * 1000;
}

// Code for looping through chunks of symbols; each chunk has 5 symbols/stocks in it
foreach (var chunkedSymbol in chunkedSymbols)
{
    // getting OAuth token
    string AuthToken = await OAuth();
    if (String.IsNullOrEmpty(AuthToken))
    {
        throw new ArgumentNullException("Access Token is null!");
    }

    processingSymbols += chunkSize;
    OptionChainReq.symbol = chunkedSymbol.ToArray();

    async Task func()
    {
        // function that makes the request
        var response = await GetOptionChain(AuthToken, ClientId, OptionChainReq);
        // concat the result into the main list
        appResponses = appResponses.Concat(response).ToList();
    }

    // if the request count reaches 115, process the remaining requests first
    if (processingSymbols >= 115)
    {
        await Task.WhenAll(tasks);
        processingSymbols = 0;
    }

    tasks.Add(func());

    // 2600 millisecond delay to wait for all the data to process
    await Task.Delay(delay);
}

// once the loop has completed, process the remaining requests
await Task.WhenAll(tasks);
// This code processes every symbol; it is inside GetOptionChain()
try
{
    var tasks = new List<Task>();
    foreach (string symbol in OptionChainReq.symbol)
    {
        List<KeyValuePair<string, string>> Params = new List<KeyValuePair<string, string>>();
        string requestParams = string.Empty;

        // Converting request params to key/value pairs.
        Params.Add(new KeyValuePair<string, string>("apikey", ClientId));

        // URL request query parameters.
        requestParams = new FormUrlEncodedContent(Params).ReadAsStringAsync().Result;
        string endpoint = ApiBaseUrl + "/marketdata/chains?";
        HttpClient.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Bearer", OAuthToken);
        Uri tosUri = new Uri(endpoint + requestParams, UriKind.Absolute);

        async Task func()
        {
            try
            {
                string responseString = await GetTosData(tosUri);
                OptionChainResponse OptionChainRes = JsonConvert.DeserializeObject<OptionChainResponse>(responseString);
                var mappedOptionAppRes = MapOptionsAppRes(OptionChainRes);
                if (mappedOptionAppRes != null)
                {
                    OptionsData.Add(mappedOptionAppRes);
                }
            }
            catch (Exception ex)
            {
                throw new Exception("Crashed");
            }
        }

        // asynchronously processing each request
        tasks.Add(func());
    }

    // making sure all 5 requests are processed
    await Task.WhenAll(tasks);
}
catch (Exception ex)
{
    failedSymbols += " " + string.Join(",", OptionChainReq.symbol);
}
// The code below is for an individual request
public async Task<string> GetTosData(Uri url)
{
    try
    {
        await semaphore.WaitAsync();
        if (IsTripped())
        {
            return UNAVAILABLE;
        }
        var response = await HttpClient.GetAsync(url);
        if (response.StatusCode == System.Net.HttpStatusCode.Unauthorized)
        {
            string OAuthToken = await OAuth();
            HttpClient.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Bearer", OAuthToken);
            return await GetTosData(url);
        }
        else if (response.StatusCode != HttpStatusCode.OK)
        {
            TripCircuit(reason: $"Status not OK. Status={response.StatusCode}");
            return UNAVAILABLE;
        }
        return await response.Content.ReadAsStringAsync();
    }
    catch (Exception ex) when (ex is OperationCanceledException || ex is TaskCanceledException)
    {
        Console.WriteLine("Timed out");
        TripCircuit(reason: $"Timed out");
        return UNAVAILABLE;
    }
    finally
    {
        semaphore.Release();
    }
}

Azure Blob Storage DownloadToStreamAsync hangs during network change

I've been having an issue with the Microsoft.WindowsAzure.Storage v9.3.3 and Microsoft.Azure.Storage.Blob v11.1.0 NuGet libraries, specifically when downloading a large file. If you change your network during the "DownloadToStreamAsync" call, it hangs. I've been seeing my code, which processes a lot of files, hang occasionally, and in trying to narrow it down I think a network change might be a reliable way of triggering the failure in the Azure Blob Storage libraries.
More info about the issue:
When I unplug my network cable, my computer switches to WiFi but the request never resumes.
If I start the download on WiFi and then plug in my network cable, the same thing happens.
The "ServerTimeout" property never fails the request or behaves as described in the documentation.
The "MaximumExecutionTime" property does fail the request, but we don't want to limit ourselves to a fixed time period, especially because we're dealing with large files. (A rough sketch of how these two properties are applied follows this list.)
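For reference, this is roughly how those two properties are set, using the same BlobRequestOptions type as the repro code below; the values here are arbitrary, not recommendations:

var options = new BlobRequestOptions
{
    ServerTimeout = TimeSpan.FromSeconds(30),         // per-attempt timeout enforced by the service
    MaximumExecutionTime = TimeSpan.FromMinutes(30),  // overall client-side cap across all retries
    RetryPolicy = new LinearRetry(TimeSpan.FromSeconds(20), 4)
};
// await blobRef.DownloadToStreamAsync(stream, null, options, null);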
The following code fails 100% of the time if the network is changed during the call.
static void Main(string[] args)
{
    try
    {
        CloudStorageAccount.TryParse("<Connection String>", out var storageAccount);
        var cloudBlobClient = storageAccount.CreateCloudBlobClient();
        var container = cloudBlobClient.GetContainerReference("<Container Reference>");
        var blobRef = container.GetBlockBlobReference("Large Text.txt");

        Stream memoryStream = new MemoryStream();
        BlobRequestOptions optionsWithRetryPolicy = new BlobRequestOptions()
        {
            ServerTimeout = TimeSpan.FromSeconds(5),
            RetryPolicy = new LinearRetry(TimeSpan.FromSeconds(20), 4)
        };

        blobRef.DownloadToStreamAsync(memoryStream, null, optionsWithRetryPolicy, null).GetAwaiter().GetResult();
        Console.WriteLine("Completed");
    }
    catch (Exception ex)
    {
        Console.WriteLine($"Exception: {ex.Message}");
    }
    finally
    {
        Console.WriteLine("Finished");
    }
}
I've found an open issue in the Azure Storage GitHub repository, but it seems inactive.
Is there any other approach I could take to reliably and efficiently download a blob or something I'm missing when using this package?
Thanks to Mohit for the suggestion.
Create a Task to check the stream length in the background
If the stream hasn't increased in a set period of time, cancel the DownloadToStreamAsync
DISCLAIMER: I haven't written tests around this code or worked out how to make it performant (you couldn't have a wait like this for every file you process). I might also need to cancel the background checking task once the download completes, I don't know yet; I just wanted to get it working first. I don't deem it production ready.
// Create download cancellation token
var downloadCancellationTokenSource = new CancellationTokenSource();
var downloadCancellationToken = downloadCancellationTokenSource.Token;

var completedChecking = false;

// A background task to confirm the download is still progressing
Task.Run(() =>
{
    // Allow the download to start
    Task.Delay(TimeSpan.FromSeconds(2)).GetAwaiter().GetResult();

    long currentStreamLength = 0;
    var currentRetryCount = 0;
    var availableRetryCount = 5;

    // Keep checking for the duration of the download
    while (!completedChecking)
    {
        Console.WriteLine("Checking");

        if (currentRetryCount == availableRetryCount)
        {
            Console.WriteLine($"RETRY WAS {availableRetryCount} - FAILING TASK");
            downloadCancellationTokenSource.Cancel();
            completedChecking = true;
        }

        if (currentStreamLength == memoryStream.Length)
        {
            currentRetryCount++;
            Console.WriteLine($"Length has not increased. Incremented Count: {currentRetryCount}");
            Task.Delay(TimeSpan.FromSeconds(10)).GetAwaiter().GetResult();
        }
        else
        {
            currentStreamLength = memoryStream.Length;
            Console.WriteLine($"Download in progress: {currentStreamLength}");
            currentRetryCount = 0;
            Task.Delay(TimeSpan.FromSeconds(1)).GetAwaiter().GetResult();
        }
    }
});

Console.WriteLine("Starting Download");
blobRef.DownloadToStreamAsync(memoryStream, downloadCancellationToken).GetAwaiter().GetResult();
Console.WriteLine("Completed Download");

completedChecking = true;
Console.WriteLine("Completed");

azure functions running in sequence, parallel desired

I have an Azure Function that I'm calling in parallel using PostAsync.
I arrange all my tasks in a queue and then wait for the responses in parallel using "WhenAll".
I can confirm that there is a burst of HTTP activity out to Azure and then HTTP activity stops on my local machine while I wait for responses from Azure.
When I monitor the function in Azure Portal, it looks like the requests are arriving every three seconds or so, even though from my side there is no network traffic after the initial burst.
When I get my results back, they are arriving in sequence, in the exact same order I sent them out, even though the Azure Portal monitor indicates that some functions take 10 seconds to run and some take 3 seconds to run.
I am using Azure functions Version 1 with a consumption service plan.
CentralUSPlan (Consumption: 0 Small)
My host.json file is empty ==> {}
Why is this happening? Is there some setting that is required to get azure functions to execute in parallel?
public async Task<List<MyAnalysisObject>> DoMyAnalysisObjectsHttpRequestsAsync(List<MyAnalysisObject> myAnalysisObjectList)
{
    List<MyAnalysisObject> evaluatedObjects = new List<MyAnalysisObject>();
    using (var client = new HttpClient())
    {
        var tasks = new List<Task<MyAnalysisObject>>();
        foreach (var myAnalysisObject in myAnalysisObjectList)
        {
            tasks.Add(DoMyAnalysisObjectHttpRequestAsync(client, myAnalysisObject));
        }
        var evaluatedObjectsArray = await Task.WhenAll(tasks);
        evaluatedObjects.AddRange(evaluatedObjectsArray);
    }
    return evaluatedObjects;
}

public async Task<MyAnalysisObject> DoMyAnalysisObjectHttpRequestAsync(HttpClient client, MyAnalysisObject myAnalysisObject)
{
    string requestJson = JsonConvert.SerializeObject(myAnalysisObject);
    Console.WriteLine("Doing post-async:" + myAnalysisObject.Identifier);
    var response = await client.PostAsync(
        "https://myfunctionapp.azurewebsites.net/api/BuildMyAnalysisObject?code=XXX",
        new StringContent(requestJson, Encoding.UTF8, "application/json")
    );
    Console.WriteLine("Finished post-async:" + myAnalysisObject.Identifier);
    var result = await response.Content.ReadAsStringAsync();
    Console.WriteLine("Got result:" + myAnalysisObject.Identifier);
    return JsonConvert.DeserializeObject<MyAnalysisObject>(result);
}

Let code run for X seconds, after that stop

I'm working on a Windows Phone 7.1 app and want to let several lines of code run for up to 10 seconds: if they succeed within 10 seconds, continue; if not, stop the code and display a message.
The thing is, my code is not a loop. The phone tries to fetch data from a server, and if the internet connection is slow, that might take too long.
if (DeviceNetworkInformation.IsNetworkAvailable)
{
    // start timer here for 10s
    WebClient webClient = new WebClient();
    webClient.DownloadStringCompleted += loginHandler;
    webClient.DownloadStringAsync(new Uri(string.Format(url + "?loginas=" + login + "&pass=" + pwd)));
    // if 10s passed, stop code above and display MessageBox
}
You can use something like the following:
HttpClient client = new HttpClient();
var cts = new CancellationTokenSource();
cts.CancelAfter(10000);
try
{
    var response = await client.GetAsync(new Uri(string.Format(url + "?loginas=" + login + "&pass=" + pwd)), cts.Token);
    var result = await response.Content.ReadAsStringAsync();
    // work with result
}
catch (TaskCanceledException)
{
    // error/timeout handling
}
You need the following NuGet packages:
HttpClient
async/await
Make that piece of code a method and run that method separately.
Launch a timer; when 10 seconds have elapsed, check the status of the first thread.
If it has fetched everything it was supposed to, make use of that; otherwise kill the thread, clean up whatever you have to clean up, and return that error message. (A sketch of this idea follows.)
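A minimal sketch of that idea, racing the request against a 10-second delay with Task.WhenAny rather than literally killing a thread (it assumes the HttpClient/async packages mentioned in the answer above; url, login and pwd are the question's own variables):

var client = new HttpClient();
Task<string> download = client.GetStringAsync(
    new Uri(string.Format(url + "?loginas=" + login + "&pass=" + pwd)));

// whichever finishes first "wins"
Task first = await Task.WhenAny(download, Task.Delay(TimeSpan.FromSeconds(10)));

if (first == download)
{
    string result = await download;   // finished in time, use the data
}
else
{
    // timed out: the request keeps running in the background, but we move on
    MessageBox.Show("The server did not respond within 10 seconds.");
}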
