Azure Pipelines bind to localhost for tests - C#

I am trying to run integration tests in Azure Pipelines by spinning up a web server and hitting it with HTTP requests in a different process.
The error I received is
System.AggregateException : One or more errors occurred. (Failed to bind to address http://127.0.0.1:49159: address already in use.)
The code that seems to generate the error is
var builder = new WebHostBuilder()
    .UseUrls("http://localhost:49159")
    .UseKestrel()
    .UseContentRoot(Directory.GetCurrentDirectory())
    .UseIISIntegration()
    .Configure(app =>
    {
        app.Run(handler);
    })
    .Build();
builder.Start();
I have tried a bunch of different ports, so I assume that you just cannot bind to localhost on Azure Pipelines.
Does anyone know if there is a way to achieve what I am trying to do with Azure Pipelines?
Any help would be massively appreciated!

Yes, it's definitely possible. Maybe you're starting the web host more than once and not destroying it (e.g. once per test fixture)?
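For example, a minimal sketch of a disposable fixture that owns the host (names here are illustrative, not from your code; binding to port 0 lets the OS pick a free port, which also avoids collisions on shared build agents):

using System;
using System.Linq;
using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Hosting;
using Microsoft.AspNetCore.Hosting.Server.Features;
using Microsoft.AspNetCore.Http;

public sealed class ServerFixture : IDisposable
{
    public IWebHost Host { get; }
    public string BaseAddress { get; }

    public ServerFixture()
    {
        Host = new WebHostBuilder()
            .UseKestrel()
            .UseUrls("http://127.0.0.1:0") // port 0 = ask the OS for a free port
            .Configure(app => app.Run(ctx => ctx.Response.WriteAsync("ok")))
            .Build();
        Host.Start();

        // Ask the server which address it actually bound to.
        BaseAddress = Host.ServerFeatures.Get<IServerAddressesFeature>().Addresses.First();
    }

    // Disposing stops Kestrel and releases the port for the next fixture.
    public void Dispose() => Host.Dispose();
}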
In general, these are some possible ways to run integration tests in the pipeline:
If your integration tests are written in .NET, then consider using WebApplicationFactory to run the server and tests in a single process (see the sketch after this list).
If you use different tools for integration tests, node's start-server-and-test module does a great job of starting your server, waiting for it to be ready, and then executing the tests (you can use it for .NET projects as well). Here's a sample pipeline for a .NET project.
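For the first option, a minimal sketch (assuming xUnit and the Microsoft.AspNetCore.Mvc.Testing package; Program stands in for your web project's entry-point class):

using System.Net;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc.Testing;
using Xunit;

public class SmokeTests : IClassFixture<WebApplicationFactory<Program>>
{
    private readonly WebApplicationFactory<Program> _factory;

    public SmokeTests(WebApplicationFactory<Program> factory) => _factory = factory;

    [Fact]
    public async Task Root_ResponchesOk()
    {
        // CreateClient hosts the app in-memory; no real port is bound,
        // so nothing can be "already in use" on the build agent.
        var client = _factory.CreateClient();
        var response = await client.GetAsync("/");
        Assert.Equal(HttpStatusCode.OK, response.StatusCode);
    }
}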

Related

Attach console to .NET Windows Service

I currently have several .NET Windows services running on my server. Is there a way to attach a console app to a service to get all of the ILogger data? I have had issues where the service runs perfectly as a console app/worker service, but as soon as I run it as a Windows service, it just sits there and does nothing.
I did find an article about attaching the VS debugger to the process, but this will not work with our network security.
I am open to any other suggestions as well.
The technical answer is no, but as @Fildor mentioned, you would set up a log sink of some sort. The file logger is just an example; you can also have the logs sent as emails, posted to some cloud logging service such as Splunk or CloudWatch, etc.
One issue you may run into is that you need to capture an error prior to ILogger being available and properly configured for you. Here is a guide I followed for capturing startup errors using NLog: https://alistairevans.co.uk/2019/10/04/asp-net-core-3-0-logging-in-the-startup-class-with-nlog/
Startup classes are no longer necessary in the latest .NET version, so I modified their example to be code you would have in Program.cs:
// NLog: set up the NLog config first, so failures before host startup can be logged.
var logger = NLogBuilder.ConfigureNLog("nlog.config").GetCurrentClassLogger();
try
{
    var host = Host.CreateDefaultBuilder(args)
        .ConfigureLogging(logging =>
        {
            logging.ClearProviders();
            logging.SetMinimumLevel(LogLevel.Trace);
        })
        // Use NLog to provide ILogger instances.
        .UseNLog()
        .Build();

    host.Run();
}
catch (Exception ex)
{
    // The host may have died before ILogger was configured; log with NLog directly.
    logger.Error(ex, "Host terminated unexpectedly");
    throw;
}
finally
{
    // Flush and close internal NLog timers/threads before the process exits.
    NLog.LogManager.Shutdown();
}
Here's the list of available logging targets (sinks) you can configure in that NLog configuration file: https://nlog-project.org/config/
The same thing can be accomplished with other logging providers you may already be using, such as Serilog, log4net, etc.
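For instance, a comparable Serilog sketch (an assumption on my part that you'd use the Serilog.Extensions.Hosting and Serilog.Sinks.Console packages; the sink and messages are placeholders):

using System;
using Microsoft.Extensions.Hosting;
using Serilog;

public static class Program
{
    public static void Main(string[] args)
    {
        // Configure Serilog before the host exists, so startup failures are captured.
        Log.Logger = new LoggerConfiguration()
            .MinimumLevel.Debug()
            .WriteTo.Console()
            .CreateLogger();

        try
        {
            Host.CreateDefaultBuilder(args)
                .UseSerilog() // route ILogger<T> through Serilog
                .Build()
                .Run();
        }
        catch (Exception ex)
        {
            Log.Fatal(ex, "Host terminated unexpectedly");
        }
        finally
        {
            Log.CloseAndFlush();
        }
    }
}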

How to make one ASP.NET Docker container available to another?

I have two services in my solution, both of which are ASP.NET projects. One of them works with the database (a service named Topics), and the other should be a gateway. I deploy both services to Docker with Visual Studio, without writing any commands to run the projects.
In the Topics service's launchSettings.json I define the SSL and HTTP ports, and while it runs in Docker I can use this service in the browser with no problem at https://localhost:port/something_here.
Here is part of launchSettings.json in the Topics service:
"commandName": "Docker",
"launchBrowser": true,
"launchUrl": "{Scheme}://{ServiceHost}:{ServicePort}/topics",
"httpPort": 44381,
"sslPort": 44382,
"publishAllPorts": true,
"useSSL": true
}
But later, when I try to contact the Topics service from the Gateway service (which is another Docker container), I receive an error that it can't be reached.
In the Gateway I try to connect to http://localhost:44381/topics and https://localhost:44382/topics, and I get no answer from either.
What should I do to be able to connect my Docker containers on my localhost?
localhost is a reserved hostname that should always refer to the machine itself. Within a Docker container, localhost refers to the container itself, not to the host computer. If you want to connect to another Docker container, you can use the hostname of that container.
The hostname of a Docker container is usually the container name, but I have seen that a container created by Visual Studio does not get the container name configured as an alias on the network. If you use docker-compose as described at https://learn.microsoft.com/en-us/visualstudio/containers/tutorial-multicontainer?view=vs-2019, it is probably a lot easier. Otherwise you might be able to set a hostname via the commandLineArgs property in launchSettings.json.
If you have a container FrontEnd and a container Topics, you can connect from the FrontEnd container by using https://topics. Now please be aware that the port number is probably also different from the one you are using now! Your https://localhost:44382 is probably forwarded to https://topics:443 (since 443 is the default HTTPS port, this is identical to https://topics).
Edit: don't get confused if you also want to make AJAX calls. In that case the call is made from your browser, which runs on the host OS, in which case you need to use https://localhost:44382 again.
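For illustration, a hypothetical docker-compose.yml for this setup (service names, paths, and the TopicsUrl variable are placeholders, not from the question):

version: "3.8"
services:
  topics:
    build: ./Topics
    expose:
      - "80"          # reachable from other services as http://topics
  gateway:
    build: ./Gateway
    ports:
      - "44381:80"    # only the gateway is published to the host/browser
    environment:
      - TopicsUrl=http://topics   # the gateway calls Topics by service name
    depends_on:
      - topics

On the default compose network, each service name doubles as a DNS alias, which is exactly the hostname behaviour described above.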
In the end I solved my problem by building the image and exposing the ports manually with docker commands. Visual Studio was doing something wrong that made it possible to connect to the container manually, but impossible from another Docker container. So it was a mistake to use the built-in way of running Docker in VS.

IDataProtector unable to decrypt when application runs as a service

I'm using ASP.NET Core data protection with default DI behavior. That works fine when my ASP.NET application is hosted on IIS. Now I have an application that needs to run as a service, so I'm using Microsoft.Extensions.Hosting.WindowsServices to do the Windows service part with our standard
Host.CreateDefaultBuilder(args)
    .UseWindowsService()
The BackgroundService then hosts ASP.NET Core with your standard
var builder = Host.CreateDefaultBuilder()
    .ConfigureAppConfiguration((hostingContext, config) =>
    {
        config.AddJsonFile("secrets.json", optional: true, reloadOnChange: true);
    })
    .ConfigureWebHostDefaults(....)
Inside the background service, I can then resolve an instance of IDataProtectionProvider, create a protector, and use it to unprotect my secrets:
var dataProtectionProvider = Container.Resolve<Microsoft.AspNetCore.DataProtection.IDataProtectionProvider>();
var protector = dataProtectionProvider.CreateProtector(appName);
var decryptedSecret = protector.Unprotect(someSecret);
Now that all works fine as long as I run my application from the CLI. But running it as a service (same file, same location, and of course under the same account), I get an 'invalid payload' exception when I call Unprotect.
I know same path and same account is important, so that's taken care of. I also know that the application can find secrets.json as I wrote some probing code that checks if the file is present and can be read before I even try to unprotect. I'm even checking if the string I'm trying to unprotect is null/empty (which it isn't).
I finally installed a debug build as a service and attached the debugger, and when I look at the IDataProtectionProvider, it has a Purpose: when running as a service, that's c:\windows\system32; when my app runs from the CLI, it's the path to the exe. So, is there a way to specify the purpose myself so things behave the same regardless of CLI/service?
So how can I control the purpose?
Having noted the difference in the purpose of the IDataProtectionProvider, I was well on my way to solving this. The solution was to set a static purpose (application name) as explained here.
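For reference, a minimal sketch of that fix (the application name and key path are placeholders). The default discriminator is the content root path, which is the exe folder from the CLI but c:\windows\system32 under the service control manager, so pinning it makes both modes interchangeable:

using System.IO;
using Microsoft.AspNetCore.DataProtection;
using Microsoft.Extensions.DependencyInjection;

public static class DataProtectionSetup
{
    public static void Configure(IServiceCollection services)
    {
        services.AddDataProtection()
            // Pin the discriminator so CLI and service runs share one "purpose".
            .SetApplicationName("MyApp")
            // Keep the key ring in a fixed folder, independent of working directory.
            .PersistKeysToFileSystem(new DirectoryInfo(@"C:\ProgramData\MyApp\keys"));
    }
}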

How should a gRPC service be hosted?

I have created a gRPC server in C# using the example given at Link. Now I want to figure out how I should host this server so that I achieve the following:
Should I make this server a console application or a Windows service? If I make it a Windows service, then updating the service will be cumbersome (which is a big negative); if I make it a console app, then updating simply means shutting down the exe, but that comes at the price of someone closing it by mistake. Is there any better way?
With IIS this issue wouldn't be there, as I could simply remove the site from the LB and stop the website to perform the update, but since gRPC won't be a part of IIS, I am not sure what the way is to get this working.
Any references for a better architecture are welcome.
We can use the Microsoft.Extensions.Hosting package to host a .NET Core console application, using the HostBuilder API to build and set up the gRPC host.
In order to run the gRPC service, we first need to start/stop the Grpc.Core.Server in a hosted service. A hosted service is basically a piece of code that the host runs when the host itself starts, and likewise when it stops. The following code implements a GrpcHostedService by implementing the IHostedService interface:
using System.Threading;
using System.Threading.Tasks;
using Grpc.Core;
using Microsoft.Extensions.Hosting;

namespace Grpc.Host
{
    public class GrpcHostedService : IHostedService
    {
        private readonly Server _server;

        public GrpcHostedService(Server server)
        {
            _server = server;
        }

        public Task StartAsync(CancellationToken cancellationToken)
        {
            _server.Start();
            return Task.CompletedTask;
        }

        public async Task StopAsync(CancellationToken cancellationToken) => await _server.ShutdownAsync();
    }
}
In Program.cs, use the HostBuilder API to build and set up our gRPC host:
public class Program
{
    public static async Task Main(string[] args)
    {
        var hostBuilder = new HostBuilder()
            // Add configuration, logging, ...
            .ConfigureServices((hostContext, services) =>
            {
                // Better to use dependency injection for GreeterImpl.
                Server server = new Server
                {
                    Services = { Greeter.BindService(new GreeterImpl()) },
                    Ports = { new ServerPort("localhost", 5000, ServerCredentials.Insecure) }
                };
                services.AddSingleton<Server>(server);
                services.AddSingleton<IHostedService, GrpcHostedService>();
            });

        await hostBuilder.RunConsoleAsync();
    }
}
By doing this, the generic host will automatically run StartAsync on our hosted service, which in turn will call Start on the Server instance, essentially starting the gRPC server.
When we shut down the host with Ctrl-C, the generic host will automatically call StopAsync on our hosted service, which in turn will call ShutdownAsync on the Server instance to do some cleanup.
For other configuration in HostBuilder, you can see this blog.
I'm going to add one more option.
With .NET Core, you can now run this as a Linux daemon.
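A sketch of that option (my assumption: the Microsoft.Extensions.Hosting.Systemd package is referenced, and the GrpcHostedService from the answer above is reused):

using System.Threading.Tasks;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Hosting;

public static class Program
{
    public static async Task Main(string[] args)
    {
        await new HostBuilder()
            // No-op outside systemd, so the same exe still runs from the CLI.
            .UseSystemd()
            .ConfigureServices(services =>
                services.AddSingleton<IHostedService, GrpcHostedService>())
            .Build()
            .RunAsync();
    }
}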
Currently gRPC doesn't support integration with ASP.NET/IIS. You would need to host the server in a console app or as a Windows service.
Likely you would want this to be a Windows service to make it easier to keep the server running across reboots or crashes. If you want to easily turn your console application into a Windows service, I would recommend the excellent TopShelf NuGet package (see the sketch after the update steps below).
Updating the service can then be done as you would for a console app:
Stop the Windows service: net stop <service-name>
Copy the updated assemblies.
Start the Windows service: net start <service-name>
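A minimal TopShelf sketch (GrpcServerWrapper is a hypothetical class you'd write to start/stop the Grpc.Core.Server; the service names are placeholders):

using Topshelf;

public static class Program
{
    public static void Main()
    {
        HostFactory.Run(x =>
        {
            x.Service<GrpcServerWrapper>(s =>
            {
                s.ConstructUsing(name => new GrpcServerWrapper());
                s.WhenStarted(w => w.Start());   // start the gRPC server
                s.WhenStopped(w => w.Stop());    // shut it down cleanly
            });
            x.RunAsLocalSystem();
            x.SetServiceName("MyGrpcServer");
            x.SetDisplayName("My gRPC Server");
        });
    }
}

The same exe then supports MyGrpcServer.exe install / start / stop / uninstall, so the update steps above stay simple.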
My company (Shortbar) is building the application server for a hotel management system called HOLMS on gRPC. Our setup is as follows:
HOLMS.Application is a .NET class library (assembly) that does the actual work of the server
HOLMS.Application.ConsoleRunner is a C# console application that hosts HOLMS.Application. The console runner is used (1) by developers for convenience (mentioned in the question) and (2) in production scenarios running inside a Docker container, where the container runtime (e.g. Amazon ECS) implements job control/scaling. It follows "12 factor app" guidelines, including running itself as a single, standalone, stateless process, fast startup/shutdown, and environment-variable config injection. The system logs to stdout, which gets drained however stdout is drained in the prod environment (e.g. Sumo, Logstash, etc.). This is how our SaaS multi-tenant solution will go into production.
HOLMS.Application.ServiceRunner packages HOLMS.Application into a Windows service, for more traditional, on-premise situations where a customer's IT group will run the service themselves. This package uses the Windows registry for configuration and relies on Windows service job control for startup/shutdown/restarts. It logs to the Windows Event Log.
The ConsoleRunner and ServiceRunner are each only about 200 lines of code; for the most part, they just wrap the Application package and call into it.
Hope this helps.

Unit testing that a WCF service is launching

I'm new to WCF. I've created a basic service and engineer-tested it with the debugger and WCFTestClient. I've never written my own WCF client. Now I need to build unit tests for the service.
My classes:
IXService
CXService
CServiceLauncher
(Yes, I know the C prefix does not meet current standards, but it is required by my client's standards.)
My service functionality can be tested directly against XService, but I need to test CServiceLauncher as well. All I want to do is connect to a URI and discover if there is a service running there and what methods it offers.
Other questions I read:
AddressAccessDeniedException "Your process does not have access rights to this namespace" when Unit testing WCF service - starts the service host in the unit test
WCF Unit Test - recommends hosting the service in the unit test; makes a vague reference to connecting to the service via HTTP
WCF MSMQ Unit testing - references MSMQ, which is more detailed than I need
Unit test WCF method - I never knew I could auto-generate tests, but the system isn't smart enough to know what to assert.
Test outline:
public void StartUiTest()
{
    Uri baseAddress = new Uri("http://localhost:8080/MyService");
    string soapAddress = "soap";
    IUserInterface target = new CServiceLauncher(baseAddress, soapAddress);
    try
    {
        Assert.AreEqual(true, target.StartUi());
        /// #todo Assert.IsTrue(serviceIsRunning);
        /// #todo Assert.IsTrue(service.ExposedMethods.Count() > 4);
        Assert.Inconclusive("This tells us nothing about the service");
    }
    finally
    {
        target.StopUi();
    }
}
I just needed to build a simple client.
Reference:
http://webbeyond.blogspot.com/2012/11/wcf-simple-wcf-client-example.html
Add Service Reference to test project
Add to test file:
using System.ServiceModel;
using MyTests.ServiceReferenceNamespace;
The code inside the try block is now:
Assert.AreEqual(true, target.StartUi());
XServiceClient client = new XServiceClient();
client.GetSessionID();
Assert.AreEqual(CommunicationState.Opened, client.State, "Wrong communication state after first call");
This is not a real answer, so please take it easy.
I have been trying to do similar things, and what I have learnt is that integration testing is difficult. It is difficult because there are many hidden tasks that you need to do, such as:
Make sure you can run the tests regularly
Make sure integration tests can run on the test environment
Maintain different config files, as your environment will be different from the test one
Configure something to automate the running of integration tests (CI)
Pray there will be no changes to the paths, test environment, config, hosting platforms etc
Fight security permissions, as the test runner usually cannot host WCF services without admin permissions
Maintain your test harness
To me, this was a huge headache for little gain. Don't get me wrong, integration testing is a positive thing; it just requires a lot of time to develop and support.
What have I learnt? Do not bother with integration testing of WCF services. Instead, I write a lot of unit tests to test the contract, state and behaviour. By covering those, I can become confident in the quality of the software. And I fight the integration of WCF during deployment. This is usually a single battle to configure the environment or VM, and the next time, deployment goes nice and smooth in a (semi-)automated manner.
Most people would also automate deployment with Chef and the like; those tools can fully configure the environment and deploy the WCF service.
