VSTS: update build via REST API - C#

My goal is to update a queued build's pool information, i.e. move the queued build into another pool, via the REST API. I have tried a lot of things and cannot find any documentation - not even on which parameters can actually be set.
Code I have tried to accomplish this task with:
try
{
    build.Queue.Id = newQueue.Id;
    build.Queue.Name = newQueue.Name;
    build.Queue.Pool = new Microsoft.TeamFoundation.Build.WebApi.TaskAgentPoolReference();
    build.Queue.Pool.Name = newQueue.Pool.Name;
    build.Queue.Pool.Id = newQueue.Pool.Id;
    build.Queue.Pool.IsHosted = newQueue.Pool.IsHosted;

    var c = connection.GetBuildClient();
    var tf = await c.UpdateBuildAsync(build);
    return true;
}
catch (Exception ex)
{
    return false;
}
(The above code is hacky; I am just trying to get it to work.)
Things I have tried:
1) I have tried copying the exact JSON and sending it via a raw PATCH request; I get a response saying the build was modified, but NOTHING actually changes except the last-modified user, which becomes me.
2) I have tried editing the AgentPoolQueue in the request body via the API, but it is not the pool itself I want to change; I want to change the build's pool information so that the build is linked to another pool instead.

update the build's pool information to move the queued build/build
into another pool via the REST API
After testing: updating a queued build's agent pool through the REST API is currently not supported in Azure DevOps. Once a build has been queued, its agent pool information cannot be modified.
The AgentPoolQueue field does appear in the request body in the Update Build REST API documentation. However, when you actually use it, you will find that the response status is 200 OK but the pool information of the build has not actually been updated. This is not stated in the documentation, and it does cause confusion.
The agent pool is determined when you run the pipeline. Once the build is running, or even still in the queued state, it cannot be changed. You could submit a request for this feature on the UserVoice site, which is the main forum for product suggestions. More votes and comments increase the priority of feedback.
At present, you can only cancel the queued builds, run new builds, and specify a different agent pool in the new builds.
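If cancel-and-requeue is acceptable, a sketch of that workaround using the .NET client library might look like the following. This is a hypothetical helper, not a documented recipe: it assumes a connected BuildHttpClient from Microsoft.TeamFoundation.Build.WebApi, and that newQueueId identifies an agent queue belonging to the target pool.

```csharp
// Hypothetical sketch: cancel a queued build, then queue a fresh build of the
// same definition on a different agent queue. Assumes a connected BuildHttpClient
// and that newQueueId is the id of an agent queue in the target pool.
public static async Task<Build> MoveQueuedBuildAsync(
    BuildHttpClient buildClient, string project, int buildId, int newQueueId)
{
    // Cancel the queued build by patching its status to Cancelling.
    var patch = new Build { Status = BuildStatus.Cancelling };
    var cancelled = await buildClient.UpdateBuildAsync(patch, project, buildId);

    // Queue a new build of the same definition, this time specifying the new queue.
    var newBuild = new Build
    {
        Definition = new DefinitionReference { Id = cancelled.Definition.Id },
        Queue = new AgentPoolQueue { Id = newQueueId },
        SourceBranch = cancelled.SourceBranch
    };
    return await buildClient.QueueBuildAsync(newBuild, project);
}
```

Note that this produces a new build with a new ID; any links to the old build number will point at the cancelled run.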

You need to update the build using the existing build ID:
public async Task<Build> UpdateBuildAsync(Build build, string id)
{
    var updateBuild = await Repository.GetBuildAsync(id);
    if (updateBuild != null)
    {
        updateBuild.Timestamp = DateTime.Now;
        updateBuild.Status = build.Status;
        updateBuild.Description = build.Description;
        if (build.Status == (int)BuildStatus.BuildQueued)
        {
            updateBuild.VSTSBuildId = build.VSTSBuildId;
        }
        if (build.Status == (int)BuildStatus.DeploymentQueued)
        {
            updateBuild.TemplateParameterUri = build.TemplateParameterUri;
            updateBuild.TemplateUri = build.TemplateUri;
        }
        updateBuild.PkgURL = build.PkgURL;
        await Repository.UpdateBuildAsync(updateBuild);
        return await Repository.GetBuildAsync(id);
    }
    return updateBuild;
}

Related

Threads increase abnormally in linux service

I have a service that runs on Linux under systemd but is compiled and debugged in VS2022 under Windows.
The service is mainly a proxy to a MariaDB 10 database, shaped as a BackgroundWorker serving clients via SignalR.
If I run it in release mode on Windows, the number of logical threads remains at a reasonable value (approximately 20-25). See pic below.
Under Linux, after a few minutes (I cannot give you more insight, unfortunately... I still have to figure out what could be changing), the number of threads starts increasing constantly, every second.
See pic here, already at more than 100 and still counting:
Reading "current logical threads increasing / thread stack is leaking" I got confirmation that the CLR allows new threads if the others are not completing, but there is no change in the code when moving from Windows to Linux.
This is the HostBuilder with the call to UseSystemd():
 public static IHostBuilder CreateWebHostBuilder(string[] args)
        {
            string curDir = MondayConfiguration.DefineCurrentDir();
            IConfigurationRoot config = new ConfigurationBuilder()
                // .SetBasePath(Directory.GetCurrentDirectory())
                .SetBasePath(curDir)
                .AddJsonFile("servicelocationoptions.json", optional: false, reloadOnChange: true)
#if DEBUG
                   .AddJsonFile("appSettings.Debug.json")
#else
                   .AddJsonFile("appSettings.json")
#endif
                   .Build();
            return Host.CreateDefaultBuilder(args)
                .UseContentRoot(curDir)
                .ConfigureAppConfiguration((_, configuration) =>
                {
                    configuration
                    .AddIniFile("appSettings.ini", optional: true, reloadOnChange: true)
#if DEBUG
                   .AddJsonFile("appSettings.Debug.json")
#else
                   .AddJsonFile("appSettings.json")
#endif
                    .AddJsonFile("servicelocationoptions.json", optional: false, reloadOnChange: true);
                })
                .UseSerilog((_, services, configuration) => configuration
                    .ReadFrom.Configuration(config, sectionName: "AppLog")// (context.Configuration)
                    .ReadFrom.Services(services)
                    .Enrich.FromLogContext()
                    .WriteTo.Console())
                // .UseSerilog(MondayConfiguration.Logger)
                .ConfigureServices((hostContext, services) =>
                {
                    services
                    .Configure<ServiceLocationOptions>(hostContext.Configuration.GetSection(key: nameof(ServiceLocationOptions)))
                    .Configure<HostOptions>(opts => opts.ShutdownTimeout = TimeSpan.FromSeconds(30));
                })
                .ConfigureWebHostDefaults(webBuilder =>
                {
                    webBuilder.UseStartup<Startup>();
                    ServiceLocationOptions locationOptions = config.GetSection(nameof(ServiceLocationOptions)).Get<ServiceLocationOptions>();
                    string url = locationOptions.HttpBase + "*:" + locationOptions.Port;
                    webBuilder.UseUrls(url);
                })
                .UseSystemd();
        }
In the meantime I am trying to trace all the Monitor.Enter() calls that I use to serialize the API endpoints that touch the state of the service and its inner structures, but on Windows everything seems fine.
I am starting to wonder if the issue is in the call to systemd. I would like to know what is really involved in a call to UseSystemd(), but there is not much documentation around.
I did just find https://devblogs.microsoft.com/dotnet/net-core-and-systemd/ by Glenn Condron and a few quick notes on MSDN.
EDIT 1: To debug further, I created a class that scans the thread pool using ClrMD.
My main service has a heartbeat (oddly, it is called Ping) as follows (note the call to ProcessTracker.Scan()):
private async Task Ping()
{
    await _containerServer.SyslogQueue.Writer.WriteAsync((
        LogLevel.Information,
        $"Monday Service active at: {DateTime.UtcNow.ToLocalTime()}"));
    string processMessage = ProcessTracker.Scan();
    await _containerServer.SyslogQueue.Writer.WriteAsync((LogLevel.Information, processMessage));
    _logger.DebugInfo()
        .Information("Monday Service active at: {Now}", DateTime.UtcNow.ToLocalTime());
}
where the ProcessTracker is constructed like this:
public static class ProcessTracker
{
    static ProcessTracker()
    {
    }

    public static string Scan()
    {
        // see https://stackoverflow.com/questions/31633541/clrmd-throws-exception-when-creating-runtime/31745689#31745689
        StringBuilder sb = new();
        string answer = $"Active Threads{Environment.NewLine}";
        // Create the data target. This tells us the versions of CLR loaded in the target process.
        int countThread = 0;
        var pid = Process.GetCurrentProcess().Id;
        using (var dataTarget = DataTarget.AttachToProcess(pid, 5000, AttachFlag.Passive))
        {
            // Note I just take the first version of CLR in the process. You can loop over
            // every loaded CLR to handle the SxS case where both desktop CLR and .NET Core
            // are loaded in the process.
            ClrInfo version = dataTarget.ClrVersions[0];
            var runtime = version.CreateRuntime();
            // Walk each thread in the process.
            foreach (ClrThread thread in runtime.Threads)
            {
                try
                {
                    sb = new();
                    // ClrRuntime.Threads also reports threads which have recently died, but
                    // whose underlying data structures have not yet been cleaned up. This can
                    // potentially be useful in debugging (!threads displays this information
                    // with XXX for their OS thread id). You cannot walk the stack of these
                    // threads though, so we skip them here.
                    if (!thread.IsAlive)
                        continue;
                    sb.Append($"Thread {thread.OSThreadId:X}:");
                    countThread++;
                    // Each thread tracks a "last thrown exception". This is the exception
                    // object which !threads prints. If that exception object is present, we
                    // display some basic exception data here. Note that you can get the
                    // stack trace of the exception with ClrHeapException.StackTrace (we
                    // don't do that here).
                    ClrException? currException = thread.CurrentException;
                    if (currException is ClrException ex)
                        sb.AppendLine($"Exception: {ex.Address:X} ({ex.Type.Name}), HRESULT={ex.HResult:X}");
                    // Walk the stack of the thread and print output similar to !ClrStack.
                    sb.AppendLine(" ------> Managed Call stack:");
                    var collection = thread.EnumerateStackTrace().ToList();
                    foreach (ClrStackFrame frame in collection)
                    {
                        // Note that ClrStackFrame currently only has three pieces of data:
                        // stack pointer, instruction pointer, and frame name (which comes
                        // from ToString). Future versions of this API will allow you to get
                        // the type/function/module of the method (instead of just the
                        // name). This is not yet implemented.
                        sb.AppendLine($"    {frame}");
                    }
                }
                catch
                {
                    // skip to the next
                }
                finally
                {
                    answer += sb.ToString();
                }
            }
        }
        answer += $"{Environment.NewLine} Total threads listed: {countThread}";
        return answer;
    }
}
All fine: on Windows it prints a lot of nice information in a kind of textual tree view.
The point is that somewhere it requires Kernel32.dll, and on Linux that is not available. Can someone give hints on this? The service is published self-contained (without the .NET infrastructure), in release mode, for linux-x64, as a single file.
Thanks a lot,
Alex
I found a way to skip the whole logging of what I needed with a simple debug session.
I was not aware that I could also attach remotely to a systemd process.
I just followed https://learn.microsoft.com/en-us/visualstudio/debugger/remote-debugging-dotnet-core-linux-with-ssh?view=vs-2022 for a quick step-by-step guide.
The only prerequisites are that the service is built in debug mode and that the .NET runtime is installed on the host, but that's really all.
Sorry for not having known this earlier.
Alex
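For reference, the Kernel32.dll dependency comes from the old ClrMD 1.x attach path; ClrMD 2.x replaced it and supports attaching on Linux. A minimal cross-platform rewrite of the scan, assuming the ClrMD 2.x API (DataTarget.AttachToProcess with a suspend flag), might look like this:

```csharp
// Sketch of the same thread scan on ClrMD 2.x, which supports Linux targets.
// AttachToProcess(pid, suspend: false) replaces the old AttachFlag.Passive overload.
using System;
using System.Diagnostics;
using System.Text;
using Microsoft.Diagnostics.Runtime;

public static class ProcessTracker2
{
    public static string Scan()
    {
        var sb = new StringBuilder("Active Threads" + Environment.NewLine);
        int pid = Process.GetCurrentProcess().Id;
        // Passive (non-suspending) attach to our own process.
        using DataTarget dataTarget = DataTarget.AttachToProcess(pid, suspend: false);
        ClrRuntime runtime = dataTarget.ClrVersions[0].CreateRuntime();
        foreach (ClrThread thread in runtime.Threads)
        {
            if (!thread.IsAlive)
                continue;
            sb.AppendLine($"Thread {thread.OSThreadId:X}:");
            foreach (ClrStackFrame frame in thread.EnumerateStackTrace())
                sb.AppendLine($"    {frame}");
        }
        return sb.ToString();
    }
}
```

This is a sketch under the assumption that the 2.x package is in use; the remote SSH debugging route above remains the simpler option when the runtime is available on the host.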

Is there a way to retrieve an existing NSUrlSession/cancel its task?

I am creating an NSUrlSession for a background upload using a unique identifier.
Is there a way, say after closing and reopening the app, to retrieve that NSUrlSession and cancel the upload task in case it has not been processed yet?
I tried simply recreating the NSUrlSession with the same identifier to check whether it still contains the upload task; however, it does not even allow me to create this session, throwing an exception like "A background URLSession with identifier ... already exists". This is unsurprising, as the documentation explicitly says that a session identifier must be unique.
I am trying to do this with Xamarin.Forms 2.3.4.270 in an iOS platform project.
Turns out I was on the right track. The message "A background URLSession with identifier ... already exists" actually seems to be more of a warning; no exception is thrown (the exception I had did not in fact come from duplicate session creation).
So you can in fact reattach to an existing NSUrlSession and will find the contained tasks still present, even after restarting the app. Just create a new configuration with the same identifier, use it to create a new session, ignore the warning that is printed, and go on from there.
I am not sure whether this is recommended for production use, but it works fine for my needs.
private async Task EnqueueUploadInternal(string uploadId)
{
    NSUrlSessionConfiguration configuration = NSUrlSessionConfiguration.CreateBackgroundSessionConfiguration(uploadId);
    INSUrlSessionDelegate urlSessionDelegate = (...);
    NSUrlSession session = NSUrlSession.FromConfiguration(configuration, urlSessionDelegate, new NSOperationQueue());
    NSUrlSessionUploadTask uploadTask = await (...);
    uploadTask.Resume();
}

private async Task CancelUploadInternal(string uploadId)
{
    NSUrlSessionConfiguration configuration = NSUrlSessionConfiguration.CreateBackgroundSessionConfiguration(uploadId);
    NSUrlSession session = NSUrlSession.FromConfiguration(configuration); // this will print a warning
    NSUrlSessionTask[] tasks = await session.GetAllTasksAsync();
    foreach (NSUrlSessionTask task in tasks)
        task.Cancel();
}

Bulk upload via REST api

I have the goal of uploading a Products CSV of ~3000 records to my e-commerce site. I want to utilise the REST API that my e-comm platform provides so I have something I can re-use and build upon for future sites that I may create.
My main issue that I am having trouble working through is:
- System.Threading.ThreadAbortException
which I can only attribute to how long it takes to process all 3K records via POST requests. My code:
public ActionResult WriteProductsFromFile()
{
    string fileNameIN = "19107.txt";
    string fileNameOUT = "19107_output.txt";
    string jsonUrl = $"/api/products";
    List<string> ls = new List<string>();
    var engine = new FileHelperAsyncEngine<Prod1>();
    using (engine.BeginReadFile(fileNameIN))
    {
        foreach (Prod1 prod in engine)
        {
            outputProduct output = new outputProduct();
            if (!string.IsNullOrEmpty(prod.name))
            {
                output.product.name = prod.name;
                string productJson = JsonConvert.SerializeObject(output);
                ls.Add(productJson);
            }
        }
    }
    foreach (String s in ls)
        nopApiClient.Post(jsonUrl, s);
    return RedirectToAction("GetProducts");
}
Since I'm new to web coding, am I going about this the wrong way? Is there a preferred way to bulk-upload that I haven't come across?
I've attempted to use the TaskCreationOptions.LongRunning flag, which helps slightly but doesn't get me anywhere near my goal.
Web and API controller actions are not meant to do long-running work: besides tying up the request thread, you introduce a series of failure opportunities that you will have little recourse in recovering from.
But it's not all bad; you have a lot of options here. There is a lot of literature on async/cloud architecture that explains how to deal with files and these sorts of scenarios.
What you want to do is disconnect the processing of your file from the API request (in your application, not the third party's).
It will take a little more work but will ultimately give you a more reliable application.
Step 1:
Drop the file immediately to disk. I see you have the file on disk already; not sure how it gets there, but either way it works out the same.
Step 2:
Use a process running as:
- a console app (easiest)
- a service (requires some sort of install/uninstall of the service)
- or even a thread in your web app (but you will struggle to know when it fails)
Whichever way you choose, the process will watch a directory for file changes; when there is a change, it will kick off your method to process the file as you like.
Check out FileSystemWatcher; here is a basic example: https://www.dotnetperls.com/filesystemwatcher
Additionally:
If you are interested in running a thread in your API/web app, take a look at https://www.hanselman.com/blog/HowToRunBackgroundTasksInASPNET.aspx for some options.
You don't have to use a FileSystemWatcher, of course; you could trigger via a flag in a DB that is checked periodically, or via a system event.
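To make the watcher approach concrete, a minimal console-app sketch could look like the following. The watched path and the ProcessFile body are placeholders for your own CSV-and-POST logic:

```csharp
// Minimal console-app sketch of the directory-watcher approach.
// The watched path and ProcessFile body are placeholders.
using System;
using System.IO;

class Watcher
{
    static void Main()
    {
        using var watcher = new FileSystemWatcher(@"C:\uploads", "*.txt");
        // Fire when a new file appears in the directory.
        watcher.Created += (s, e) => ProcessFile(e.FullPath);
        watcher.EnableRaisingEvents = true;

        Console.WriteLine("Watching for files. Press Enter to exit.");
        Console.ReadLine();
    }

    static void ProcessFile(string path)
    {
        // Parse the CSV and POST each record to the products API here,
        // free of any web-request timeout or ThreadAbortException.
        Console.WriteLine($"Processing {path}");
    }
}
```

One practical caveat: the Created event can fire while the file is still being written, so production code usually retries opening the file until the writer has released it.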

Windows service - unrecoverable FtpWebRequest timeout when an FTP provider maintenance window occurs

I have a Windows service that, every hour on a schedule, downloads a file from an FTP server. It uses the following code to do this:
var _request = (FtpWebRequest)WebRequest.Create(configuration.Url);
_request.Method = WebRequestMethods.Ftp.DownloadFile;
_request.Timeout = 20000;
_request.Credentials = new NetworkCredential("auser", "apassword");

using (var _response = (FtpWebResponse)_request.GetResponse())
using (var _responseStream = _response.GetResponseStream())
using (var _streamReader = new StreamReader(_responseStream))
{
    this.c_fileData = _streamReader.ReadToEnd();
}
Normally, downloading the FTP data works perfectly fine; however, every few months the FTP server provider notifies us that some maintenance needs to be performed. Once maintenance is started (it usually lasts only 2 or 3 hours), our hourly FTP download fails, i.e. it times out, which is expected.
The problem is that after the maintenance window our Windows service continues to time out on every attempt to download the file. Our Windows service has retry logic, but each retry also times out.
Once we restart the Windows service, the application starts downloading FTP files successfully again.
Does anyone know why we have to restart the Windows service in order to recover from this failure? Could it be a network issue, e.g. DNS?
Note 1: There are similar questions to this one already, but they do not involve a maintenance window, and they do not have any credible answers either.
Note 2: We profiled the memory of the application and it seems all FTP objects are being disposed of correctly.
Note 3: We executed a console app with the same FTP code after the maintenance window and it worked fine, while the Windows service was still timing out.
Any help much appreciated.
We eventually got to the bottom of this issue, albeit not all questions were answered.
When we used a different memory profiler, it showed that two FtpWebRequest objects had been sitting in memory, undisposed, for days in the process. These objects were what was causing the problem, i.e. they were not being properly disposed.
From research, to solve the issue, we did the following:
- Set KeepAlive to false
- Set the connection lease timeout to a limited value
- Set the max idle time to a limited value
- Wrapped the request in a try/catch/finally, aborting the request in the finally block
We changed the code to the following:
var _request = (FtpWebRequest)WebRequest.Create(configuration.Url);
_request.Method = WebRequestMethods.Ftp.DownloadFile;
_request.Timeout = 20000;
_request.Credentials = new NetworkCredential("auser", "apassword");
_request.KeepAlive = false;
_request.ServicePoint.ConnectionLeaseTimeout = 20000;
_request.ServicePoint.MaxIdleTime = 20000;

try
{
    using (var _response = (FtpWebResponse)_request.GetResponse())
    using (var _responseStream = _response.GetResponseStream())
    using (var _streamReader = new StreamReader(_responseStream))
    {
        this.c_fileData = _streamReader.ReadToEnd();
    }
}
catch (Exception)
{
    // rethrow without resetting the stack trace
    // ("throw genericException;" would have truncated it)
    throw;
}
finally
{
    _request.Abort();
}
To be honest, we are not sure whether everything here was strictly needed, but the problem no longer exists: objects do not hang around and the application still functions after a maintenance window, so we are happy!

MVC 5 Shared Long Running Task

I have a long-running action/method that is called when a user clicks a button in an internal MVC 5 application. The button is shared by all users, meaning a second person can come along and click it seconds after the first. The long-running task updates a shared task window for all clients via SignalR.
Is there a recommended way to check whether the task is still busy and simply notify the user that it is still working? Is there another recommended approach? (I can't use an external Windows service for the work.)
Currently what I am doing seems like a bad idea, or I could be wrong and it is feasible. See below for a sample of what I am doing.
public static Task WorkerTask { get; set; }

public JsonResult SendData()
{
    if (WorkerTask == null)
    {
        WorkerTask = Task.Factory.StartNew(async () =>
        {
            // Do the 2-15 minute long running job
        });
        WorkerTask = null;
    }
    else
    {
        TempData["Message"] = "Data is already being exported. Please see task window for the status.";
    }
    return Json(Url.Action("Export", "Home"), JsonRequestBehavior.AllowGet);
}
I don't think what you're doing will work at all. I see three issues:
1) You are storing the WorkerTask on the controller (I think). A new controller is created for every request, so a new WorkerTask will always be created.
2) Even if #1 weren't true, you would still need to wrap the instantiation of WorkerTask in a lock, because multiple clients could pass the WorkerTask == null check at the same time.
3) You shouldn't have long-running tasks in your web application. The app pool could restart at any time, killing your WorkerTask.
If you want to skip the best-practice advice of "don't do long-running work in your web app", you could use HostingEnvironment.QueueBackgroundWorkItem, introduced in .NET 4.5.2, to kick off the long-running task, and store a flag in the application cache to indicate whether the long-running process has been kicked off.
This solution still has more than a few issues (it won't work in a web farm, the app pool could die, etc.). A more robust solution would be to use something like Quartz.NET or Hangfire.
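A sketch of that QueueBackgroundWorkItem approach, using an interlocked static flag instead of the cache (the flag and the DoLongRunningExportAsync helper are illustrative names, and the web-farm/app-pool caveats above still apply):

```csharp
// Sketch: kick off the export at most once at a time via QueueBackgroundWorkItem
// (System.Web.Hosting, .NET 4.5.2+). _exportRunning: 0 = idle, 1 = running.
private static int _exportRunning;

public JsonResult SendData()
{
    // Atomically claim the flag; a concurrent second caller sees 1 and is turned away.
    if (Interlocked.CompareExchange(ref _exportRunning, 1, 0) == 0)
    {
        HostingEnvironment.QueueBackgroundWorkItem(async ct =>
        {
            try
            {
                await DoLongRunningExportAsync(ct); // the 2-15 minute job
            }
            finally
            {
                // Release the flag even if the export throws.
                Interlocked.Exchange(ref _exportRunning, 0);
            }
        });
    }
    else
    {
        TempData["Message"] = "Data is already being exported. Please see task window for the status.";
    }
    return Json(Url.Action("Export", "Home"), JsonRequestBehavior.AllowGet);
}
```

Unlike a bare Task.Factory.StartNew, QueueBackgroundWorkItem registers the work with ASP.NET so the runtime delays app-domain shutdown (briefly) while it runs, though it still cannot survive a hard app-pool recycle.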
