Issue with Azure Batch job ending with unknown error code - C#

I'm having problems with Azure Batch jobs. I'm trying to create a pool, create a CloudTask, and then execute my application package once the pool is online.
Do you see something that isn't working correctly?
Here is the code I'm using now. Main code:
await CreatePoolAsync(batchClient, currentPoolId, applicationFiles);
await CreateJobAsync(batchClient, currentJobId, currentPoolId);
await AddTasksAsync(batchClient, currentJobId, inputFiles, optimizationId, outputContainerSasUrl);
Creating the Pool:
CloudPool pool = batchClient.PoolOperations.CreatePool(
    poolId: poolId,
    targetDedicated: 1,          // 1 compute node
    virtualMachineSize: "small", // single-core, 1.75 GB memory, 225 GB disk
    cloudServiceConfiguration: new CloudServiceConfiguration(osFamily: "4")); // Windows Server 2012 R2

pool.ApplicationPackageReferences = new List<ApplicationPackageReference>
{
    new ApplicationPackageReference
    {
        ApplicationId = "my_app"
    }
};
await pool.CommitAsync(); // commit the new pool to the Batch service (assumed to follow in the original)
Creating the Job:
CloudJob job = batchClient.JobOperations.CreateJob();
job.Id = jobId;
job.PoolInformation = new PoolInformation {PoolId = poolId};
await job.CommitAsync();
And adding the task:
string taskId = "myAppEngineTask";
string taskCommandLine = $"cmd /c %AZ_BATCH_APP_PACKAGE_MY_APP%\\MyApp.Console.exe -a NSGA2 -r 1000 -m db -i {optimizationId}";

CloudTask task = new CloudTask(taskId, taskCommandLine);
task.ApplicationPackageReferences = new List<ApplicationPackageReference>
{
    new ApplicationPackageReference
    {
        ApplicationId = "my_app"
    }
};
await batchClient.JobOperations.AddTaskAsync(jobId, task);
When the tasks have been added, everything seems to be up and running, but I get error code -2146232576 and nothing is printed to any logs.

To diagnose task failures, first check whether the CloudTask's ExecutionInformation.FailureInformation (SDK 7.0.0+) or ExecutionInformation.SchedulingError (earlier SDK versions) is set, and examine those fields for any information.
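For example, a minimal diagnostic sketch (assuming Batch .NET SDK 7.0.0+; jobId and taskId are the values from the question):
// Fetch the task and dump its failure details, if any.
CloudTask completedTask = await batchClient.JobOperations.GetTaskAsync(jobId, taskId);
TaskFailureInformation failure = completedTask.ExecutionInformation?.FailureInformation;
if (failure != null)
{
    Console.WriteLine($"Category: {failure.Category}");
    Console.WriteLine($"Code: {failure.Code}");
    Console.WriteLine($"Message: {failure.Message}");
}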
For your particular task, it looks like it could be related to you adding a task-level application package reference when you have already done that at the pool level. Try omitting task.ApplicationPackageReferences, as in the sketch below.
Consult the application packages documentation for more information on the difference between pool-level and task-level application packages and which one suits your scenario best.
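For instance, with the pool-level reference in place, the task from the question can be added without its own reference (a sketch based on the question's code):
// The pool-level package reference already makes
// %AZ_BATCH_APP_PACKAGE_MY_APP% available on every node in the pool.
CloudTask task = new CloudTask(taskId, taskCommandLine);
await batchClient.JobOperations.AddTaskAsync(jobId, task);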

UploadFromStreamAsync Cancellation Token not working

Expected behaviour:
Looking at my internet usage in Task Manager after running, I should see a spike in upload for around 5 seconds and then a drop back to normal levels.
Result:
Upload speed spikes for a lot longer (closer to a minute or more, indicative of the full file being uploaded).
Tried:
- Cancelling after various delays (e.g. 1 second, 10 seconds, etc.)
- Immediately cancelling with the token after starting the upload
- Using UploadFromByteArrayAsync() instead of UploadFromStreamAsync()
- Using BeginUploadFromStream() with EndUploadFromStream()
Although I can quite easily cancel a download using the CancellationToken, no matter what I do, I can't cancel this upload. Also, weirdly, searching online, I can't find any instance of anyone else having problems cancelling an upload.
_connectionString = "xxx";
if (_connectionString != "")
{
    _storageAccount = CloudStorageAccount.Parse(_connectionString);
    _blobClient = _storageAccount.CreateCloudBlobClient();
}

string ulContainerName = "speedtest";
string ulBlobName = "uploadTestFile" + DateTime.UtcNow.ToLongTimeString();
CloudBlobContainer container = _blobClient.GetContainerReference(ulContainerName);
CloudBlockBlob ulBlockBlob = container.GetBlockBlobReference(ulBlobName);

CreateDummyDataAsync(_fileUploadSizeMB);
byte[] byteArray = System.IO.File.ReadAllBytes(_filePath + "dummy_upload");
ulBlockBlob.UploadFromStreamAsync(new MemoryStream(byteArray), _ulCancellationTokenSource.Token);
_ulCancellationTokenSource.CancelAfter(5000);
To anyone who ends up in this situation and can't get the CancellationToken to work: the workaround I eventually used was
BlobRequestOptions timeoutRequestOptions = new BlobRequestOptions()
{
    // Allot 10 seconds for this API call, including retries
    MaximumExecutionTime = TimeSpan.FromSeconds(10)
};
Then include timeoutRequestOptions in the method arguments:
ulBlockBlob.UploadFromStreamAsync(new MemoryStream(byteArray), new AccessCondition(),
    timeoutRequestOptions,
    new OperationContext(),
    new progressHandler(), // progressHandler: the poster's own IProgress<StorageProgress> implementation
    cancellationToken.Token);
This forces the API call to time out after the specified period.
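For completeness, a minimal end-to-end sketch combining the timeout with a cancellation token (based on the question's code; this uses the WindowsAzure.Storage overload taking AccessCondition, BlobRequestOptions, OperationContext, and CancellationToken, where null arguments fall back to defaults):
// Await the upload so the timeout/cancellation exception actually surfaces;
// a fire-and-forget call silently swallows it.
var requestOptions = new BlobRequestOptions
{
    MaximumExecutionTime = TimeSpan.FromSeconds(10) // hard cap, including retries
};
using (var cts = new CancellationTokenSource(TimeSpan.FromSeconds(5)))
using (var stream = new MemoryStream(byteArray))
{
    try
    {
        await ulBlockBlob.UploadFromStreamAsync(stream, null, requestOptions, null, cts.Token);
    }
    catch (OperationCanceledException) { Console.WriteLine("Upload cancelled."); }
    catch (StorageException ex) { Console.WriteLine("Upload aborted: " + ex.Message); }
}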

Set WaitTimeOut for Azure Service Bus Session Processor

In the legacy version of Azure Service Bus (ASB) I could use MessageWaitTimeout in SessionHandlerOptions to control the timeout between two messages. For example, if I set the timeout to 5 seconds, then after completing the first message the queue waits 5 s before picking up the next one.
In the new Azure.Messaging.ServiceBus library, the queue waits around 1 minute before picking up the next message. I only need to process messages one by one; I don't need to process messages concurrently.
I followed this example and can't find any way to set the timeout like in the old version.
Does anyone know how to do it?
var options = new ServiceBusSessionProcessorOptions
{
    AutoCompleteMessages = false,
    MaxConcurrentSessions = 1,
    MaxConcurrentCallsPerSession = 1,
    MaxAutoLockRenewalDuration = TimeSpan.FromMinutes(2),
};
EDIT:
I found the solution: it is RetryOptions on the ServiceBusClient.
var client = new ServiceBusClient("connectionString", new ServiceBusClientOptions
{
    RetryOptions = new ServiceBusRetryOptions
    {
        TryTimeout = TimeSpan.FromSeconds(5)
    }
});
With the latest stable release, 7.2.0, this can be configured with the SessionIdleTimeout property.
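A minimal sketch of that option (assuming Azure.Messaging.ServiceBus 7.2.0 or later; the other settings are from the question):
var options = new ServiceBusSessionProcessorOptions
{
    AutoCompleteMessages = false,
    MaxConcurrentSessions = 1,
    MaxConcurrentCallsPerSession = 1,
    // Stop waiting for the next message in the current session after 5 s
    // instead of the default TryTimeout-driven wait.
    SessionIdleTimeout = TimeSpan.FromSeconds(5)
};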

Unable to upload JSON file to Azure Cosmos DB due to large file size

I tried to upload a JSON file containing a list of around 5000 JSON documents to Azure Cosmos DB with the Azure Migration Tool, and that worked: it uploaded all 5000 items.
However, when I try to do the same from a .NET application using the following code, it doesn't upload and the Azure portal shows a warning message.
Code:
public static async Task BulkImport()
{
    string json = File.ReadAllText(@"C:\Temp.json");
    List<StudentInfo> lists = JsonConvert.DeserializeObject<List<StudentInfo>>(json);

    CosmosClientOptions options = new CosmosClientOptions() { ConnectionMode = ConnectionMode.Gateway, AllowBulkExecution = true };
    CosmosClient cosmosClient = new CosmosClient(EndpointUrl, AuthorizationKey, options);
    try
    {
        Database database = await cosmosClient.CreateDatabaseIfNotExistsAsync(DatabaseName);
        Console.WriteLine(database.Id);
        Container container = await database.CreateContainerIfNotExistsAsync(ContainerName, "/id");
        Console.WriteLine(container.Id);

        List<Task> tasks = new List<Task>();
        foreach (StudentInfo item in lists)
        {
            tasks.Add(container.CreateItemAsync(item, new PartitionKey(item.id))
                .ContinueWith((Task<ItemResponse<StudentInfo>> task) =>
                {
                    Console.WriteLine("Status: " + task.Result.StatusCode + " Resource: " + task.Result.Resource.id);
                }));
        }
        await Task.WhenAll(tasks);
    }
    catch (Exception ex)
    {
        Console.WriteLine("Exception = " + ex.Message);
    }
}
Message: (screenshot of the Azure portal warning, not reproduced here)
I tried running the code with the list containing only 100 JSONs and it's working fine!
Please help me regarding this. Thanks in advance!
It is not an error, just a warning. You were trying to create documents with too many concurrent requests, which consumes too many RUs (Request Units).
Azure Cosmos DB throttles requests: when you exceed your provisioned throughput, further requests are rate-limited.
Azure also monitors this event, which is why you get the notification in the portal. You can check the RUs you used on the metrics page, and you can increase the provisioned throughput to allow more concurrency.
But if you do not want to increase throughput, you may consider the following (see the sketch after this list):
- Upload in batches to reduce concurrency.
- If your requests are throttled, add a wait-and-retry policy to your code.
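A minimal sketch of the batching idea, reusing the names from the question's code (the batch size is an illustrative assumption; requires System.Linq):
// Upload in fixed-size batches to cap concurrency and RU consumption.
const int batchSize = 100; // illustrative; tune to your provisioned RUs
for (int i = 0; i < lists.Count; i += batchSize)
{
    var batch = lists.Skip(i).Take(batchSize)
        .Select(item => container.CreateItemAsync(item, new PartitionKey(item.id)));
    await Task.WhenAll(batch); // wait for the batch before starting the next one
}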

How to force hangfire server to remove old server data for that particular server on restart?

I am showing a list of the Hangfire servers currently running on my page.
I am running the Hangfire server in a console application, but the problem is that even when my console application is not running, the Hangfire API still returns those servers.
Moreover, when I run my console application multiple times I get 3-4 Hangfire servers, even though I have only one Hangfire server running in the console application.
MVC application:
IMonitoringApi monitoringApi = JobStorage.Current.GetMonitoringApi();
var servers = monitoringApi.Servers().OrderByDescending(s => s.StartedAt);
Console application (Hangfire server):
public static void Main(string[] args)
{
    var sqlServerPolling = new SqlServerStorageOptions
    {
        QueuePollInterval = TimeSpan.FromSeconds(20)
    };
    GlobalConfiguration.Configuration.UseSqlServerStorage("ConnectionString", sqlServerPolling);

    // Set automatic retry attempts
    GlobalJobFilters.Filters.Add(new AutomaticRetryAttribute { Attempts = 0 });

    // Set worker count
    var options = new BackgroundJobServerOptions
    {
        WorkerCount = 1,
    };

    using (var server = new BackgroundJobServer(options))
    {
        Console.WriteLine("Hangfire Server1 started. Press any key to exit...");
        Console.ReadKey();
    }
}
Shouldn't the Hangfire server automatically remove the old server data for that particular server whenever I run my console application again?
I will appreciate any help :)
I dug through the source code and found:
IMonitoringApi monitoringApi = JobStorage.Current.GetMonitoringApi();
var serverToRemove = monitoringApi.Servers().First(); // <-- adjust query as needed
JobStorage.Current.GetConnection().RemoveServer(serverToRemove.Name);
If you want to see the code yourself, here are the related source code files:
Mapping of db server.Id
Background server announcement
Delete server from db with id
Code to generate server id
Via the last link, it's also clear that you can customize your server name to make it easier to find and remove:
var options = new BackgroundJobServerOptions
{
    WorkerCount = 1,
    ServerName = "removeMe",
};
// ....
IMonitoringApi monitoringApi = JobStorage.Current.GetMonitoringApi();
var serverToRemove = monitoringApi.Servers().First(svr => svr.Name.Contains("removeMe"));
JobStorage.Current.GetConnection().RemoveServer(serverToRemove.Name);
The following code removes duplicate instances of the same server:
// Start Hangfire server
var varJobOptions = new BackgroundJobServerOptions();
varJobOptions.ServerName = "job.fiscal.io";
varJobOptions.WorkerCount = Environment.ProcessorCount * 10;

app.UseHangfireServer(varJobOptions);
app.UseHangfireDashboard("/jobs", new DashboardOptions {
    Authorization = new[] { new clsHangFireAuthFilter() }
});

// Remove duplicate Hangfire servers
var varMonitoringApi = JobStorage.Current.GetMonitoringApi();
var varServerList = varMonitoringApi.Servers().Where(r => r.Name.Contains("job.fiscal.io"));
foreach (var varServerItem in varServerList)
{
    JobStorage.Current.GetConnection().RemoveServer(varServerItem.Name);
}
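Building on the snippets above, a hedged sketch that purges any server record whose heartbeat has gone stale, e.g. at application startup (the 5-minute timeout is an illustrative assumption):
// RemoveTimedOutServers is part of Hangfire's IStorageConnection; it deletes
// server records that have not sent a heartbeat within the given timeout.
using (var connection = JobStorage.Current.GetConnection())
{
    connection.RemoveTimedOutServers(TimeSpan.FromMinutes(5));
}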

Hanging process when run with .NET Process.Start -- what's wrong?

I wrote a quick and dirty wrapper around svn.exe to retrieve some content and do something with it, but for certain inputs it occasionally and reproducibly hangs and won't finish. For example, one call is to svn list:
svn list "http://myserver:84/svn/Documents/Instruments/" --xml --no-auth-cache --username myuser --password mypassword
This command line runs fine when I just do it from a command shell, but it hangs in my app. My C# code to run this is:
string cmd = "svn.exe";
string arguments = "list \"http://myserver:84/svn/Documents/Instruments/\" --xml --no-auth-cache --username myuser --password mypassword";
int ms = 5000;

ProcessStartInfo psi = new ProcessStartInfo(cmd);
psi.Arguments = arguments;
psi.RedirectStandardOutput = true;
psi.WindowStyle = ProcessWindowStyle.Normal;
psi.UseShellExecute = false;

Process proc = Process.Start(psi);
StreamReader output = new StreamReader(proc.StandardOutput.BaseStream, Encoding.UTF8);
proc.WaitForExit(ms);
if (proc.HasExited)
{
    return output.ReadToEnd();
}
This takes the full 5000 ms and never finishes. Extending the time doesn't help. In a separate command prompt, it runs instantly, so I'm pretty sure it's unrelated to an insufficient waiting time. For other inputs, however, this seems to work fine.
I also tried running a separate cmd.exe here (where exe is svn.exe and args is the original arg string), but the hang still occurred:
string cmd = "cmd";
string arguments = "/S /C \"" + exe + " " + args + "\"";
What could I be screwing up here, and how can I debug this external process stuff?
EDIT:
I'm just now getting around to addressing this. Mucho thanks to Jon Skeet for his suggestion, which indeed works great. I have another question about my method of handling this, though, since I'm a multi-threaded novice. I'd like suggestions on improving any glaring deficiencies or anything otherwise dumb. I ended up creating a small class that contains the stdout stream, a StringBuilder to hold the output, and a flag to tell when it's finished. Then I used ThreadPool.QueueUserWorkItem and passed in an instance of my class:
ProcessBufferHandler bufferHandler = new ProcessBufferHandler(proc.StandardOutput.BaseStream, Encoding.UTF8);
// ProcessStream (not shown) presumably casts the state object back to
// ProcessBufferHandler and calls its ProcessBuffer() method.
ThreadPool.QueueUserWorkItem(ProcessStream, bufferHandler);

proc.WaitForExit(ms);
if (proc.HasExited)
{
    bufferHandler.Stop();
    return bufferHandler.ReadToEnd();
}
... and ...
private class ProcessBufferHandler
{
    public Stream stream;
    public StringBuilder sb;
    public Encoding encoding;
    public State state;

    public enum State
    {
        Running,
        Stopped
    }

    public ProcessBufferHandler(Stream stream, Encoding encoding)
    {
        this.stream = stream;
        this.sb = new StringBuilder();
        this.encoding = encoding;
        state = State.Running;
    }

    public void ProcessBuffer()
    {
        sb.Append(new StreamReader(stream, encoding).ReadToEnd());
    }

    public string ReadToEnd()
    {
        return sb.ToString();
    }

    public void Stop()
    {
        state = State.Stopped;
    }
}
This seems to work, but I'm doubtful that this is the best way. Is this reasonable? And what can I do to improve it?
One standard issue: the process could be waiting for you to read its output. Create a separate thread to read from its standard output while you're waiting for it to exit. It's a bit of a pain, but that may well be the problem.
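A minimal sketch of that idea, using the framework's event-based asynchronous reads instead of a hand-rolled thread (the svn arguments and the ms timeout are from the question; the rest is illustrative):
var psi = new ProcessStartInfo("svn.exe", arguments)
{
    RedirectStandardOutput = true,
    RedirectStandardError = true,
    UseShellExecute = false
};
var output = new StringBuilder();
using (Process proc = Process.Start(psi))
{
    // Drain both streams as data arrives so neither pipe's buffer
    // can fill up and block the child process.
    proc.OutputDataReceived += (s, e) => { if (e.Data != null) output.AppendLine(e.Data); };
    proc.ErrorDataReceived += (s, e) => { /* discard, but keep stderr drained */ };
    proc.BeginOutputReadLine();
    proc.BeginErrorReadLine();
    if (proc.WaitForExit(ms))
    {
        proc.WaitForExit(); // second, untimed call flushes pending async output events
        return output.ToString();
    }
}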
Jon Skeet is right on the money!
If you don't mind polling after you launch your svn command try this:
Process command = new Process();
command.EnableRaisingEvents = false;
command.StartInfo.FileName = "svn.exe";
command.StartInfo.Arguments = "your svn arguments here";
command.StartInfo.UseShellExecute = false;
command.StartInfo.RedirectStandardOutput = true;
command.Start();

while (!command.StandardOutput.EndOfStream)
{
    Console.WriteLine(command.StandardOutput.ReadLine());
}
I had to drop an exe on a client's machine and use Process.Start to launch it.
The calling application would hang - the issue ended up being their machine assuming the exe was dangerous and preventing other applications from starting it.
Right click the exe and go to properties. Hit "Unblock" toward the bottom next to the security warning.
Based on Jon Skeet's answer, this is how I do it in the modern day (2021) with .NET 5:
var process = Process.Start(processStartInfo);
var stdErr = process.StandardError;
var stdOut = process.StandardOutput;
var resultAwaiter = stdOut.ReadToEndAsync();
var errResultAwaiter = stdErr.ReadToEndAsync();
await process.WaitForExitAsync();
await Task.WhenAll(resultAwaiter, errResultAwaiter);
var result = resultAwaiter.Result;
var errResult = errResultAwaiter.Result;
Note that you can't simply await the standard output before the error stream, because the await would hang if the standard error buffer fills up first (and the same applies the other way around).
The only safe way is to start reading both asynchronously, wait for the process to exit, and then complete the reads with Task.WhenAll.
I know this is an old post, but maybe it will help someone. I used this approach to execute some AWS (Amazon Web Services) CLI commands with .NET TPL tasks.
The command execution runs inside a .NET TPL task created in my WinForms background worker's bgwRun_DoWork method, which holds a loop with while (!bgwRun.CancellationPending). The standard output of the process is read on a separate thread via the .NET ThreadPool class:
private void bgwRun_DoWork(object sender, DoWorkEventArgs e)
{
    while (!bgwRun.CancellationPending)
    {
        // Build TPL tasks
        var tasks = new List<Task>();
        // Work to add tasks here
        tasks.Add(new Task(() =>
        {
            // Build the .NET ProcessStartInfo and Process, and start the Process here
            ThreadPool.QueueUserWorkItem(state =>
            {
                while (!process.StandardOutput.EndOfStream)
                {
                    var output = process.StandardOutput.ReadLine();
                    if (!string.IsNullOrEmpty(output))
                    {
                        bgwRun_ProgressChanged(this, new ProgressChangedEventArgs(0, new ExecutionInfo
                        {
                            Type = "ExecutionInfo",
                            Text = output,
                            Configuration = s3SyncConfiguration
                        }));
                    }
                    if (cancellationToken.GetValueOrDefault().IsCancellationRequested)
                    {
                        break;
                    }
                }
            });
        })); // work task
        // Loop through and start the tasks here, and handle completed tasks
    } // end while
}
I know my SVN repos can run slowly sometimes, so maybe 5 seconds isn't long enough? Have you copied the string you're passing to the process from a breakpoint, so you're positive it's not prompting you for anything?
