Visual Studio - testEnvironments.json - how to configure docker run? - c#

As per the MS documentation, the testEnvironments.json file can be used for running unit tests in a Docker container.
My testEnvironments.json looks as follows:
{
  "version": "1",
  "environments": [
    {
      "name": "debian-net6",
      "type": "docker",
      "dockerFile": "./testsDockerfile"
    }
  ]
}
Everything works as expected - my unit tests can be executed in a Docker container. Now I would like to configure how the actual container is created. For instance, I would like to mount the Docker socket, as with docker run -v //var/run/docker.sock:/var/run/docker.sock ...., or mount other volumes explicitly.
Visual Studio creates the container behind the scenes (if it is not already running) and mounts the solution directory whenever the unit tests are executed. Unfortunately, I haven't been able to find a way to customize how the container is created.
I am aware of other options, such as running a remote machine or a local WSL distribution, but I am interested mostly in this dockerFile/dockerImage approach.
From what I was able to gather, it seems that this sort of configuration is not supported right now (the testEnvironments.json feature is still in experimental preview). I have also looked into the .runsettings file, but it does not seem to offer anything of interest.

Related

API responds locally (vs 2019) but not in local docker container

I am new to Docker and going through documentation and Pluralsight videos. I have a really simple API I am practising/learning to run as a Docker container. Everything builds and runs, but it doesn't respond to Postman requests.
However, when I run the project in VS 2019, it responds to Postman.
Docker desktop 4.X Linux containers
VS 2019
API:
[HttpGet("basic")]
[AllowAnonymous]
public IActionResult Basic()
{
    return Ok("Alive");
}
Postman:
GET
http://localhost:5150/HealthCheck/basic
Docker Compose:
version: "3"
services:
  api:
    # snipped container name etc. for brevity
    ports:
      - "5150:5150"
    networks:
      - backend
networks:
  backend:
Dockerfile:
FROM mcr.microsoft.com/dotnet/aspnet:5.0-buster-slim AS base
WORKDIR /app
EXPOSE 5150
#snipped brevity
ENTRYPOINT ["dotnet", "TestingDocker.API.dll"]
Launch settings
{
  "iisSettings": {
    "windowsAuthentication": false,
    "anonymousAuthentication": true,
    "iisExpress": {
      "applicationUrl": "http://localhost:5150/",
      "sslPort": 0
    }
  },
Compose output:
[12:49:29 INF] Now listening on: http://[::]:80
I don't understand where it gets the port 80 from.
Docker Desktop
Running Port:5150
Docker Inspect on the image
"StdinOnce": false,
"Env": [
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
"ASPNETCORE_URLS=http://+:80",
"DOTNET_RUNNING_IN_CONTAINER=true",
"DOTNET_VERSION=5.0.10",
"ASPNET_VERSION=5.0.10"
],
I read something while troubleshooting this about how Docker Desktop's Hyper-V VMs on Windows have their own IP address, so I tried to find that and see if it would respond, but all I could find was the entire subnet defined in the Docker Desktop settings: 192.168.65.0/24.
When I look at the Hyper-V virtual NIC configs, they have an IP from a totally different subnet: 172.19.240.1.
So then I pulled a Mongo image and it DOES respond. When I run inspect on it, the only network-related difference I can see is that Mongo has a hostname.
So I'm sure it is something with my network settings, but I can't figure it out.
Edit
Per Camilo's comment I added back the Entrypoint I had tried earlier.
ENTRYPOINT ["dotnet", "DockerTest.API.dll", "--server.urls", "http://0.0.0.0:5150"]
I thought 0.0.0.0 was a wildcard IP binding, but that didn't work, so I switched to the suggested
ENTRYPOINT ["dotnet", "DockerTest.API.dll", "--urls", "http://+:5150"]
And that worked!!
One thing to note: all these tiny little changes didn't seem to be getting picked up, as supposedly a new image was building in about 0.03 seconds. So to be sure my changes throughout this process were being picked up, I used
docker-compose build --no-cache
docker-compose up -d --force-recreate
You have an environment variable like:
"ASPNETCORE_URLS=http://+:80"
That one is baked into the ASP.NET Core Docker base image and is used whenever the app is not given any specific URL, which is the case in this question.
Your confusion probably comes from the launchSettings.json file, which appears to configure the app to run on port 5150, but that has no effect here for two reasons:
The file is only used for Visual Studio debugging, and it's not published with the application, and
Even if the file were used, you're configuring the iisExpress settings, and IIS Express doesn't exist on Linux.
You have a few ways to solve this issue:
Add a Docker port mapping, as mentioned by @OneCricketeer, by using 5150:80.
Specify the URL to use as part of the ENTRYPOINT:
ENTRYPOINT ["dotnet", "DockerTest.API.dll", "--urls", "http://+:5150"]
As well as others, such as specifying the URLs directly in your Program when building the WebHost (sketched below).
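For illustration, a minimal sketch of that last option, assuming the standard .NET 5 generic-host template (the Startup class and everything except the UseUrls call come from the default template, not from the question's code):

using Microsoft.AspNetCore.Hosting;
using Microsoft.Extensions.Hosting;

public class Program
{
    public static void Main(string[] args) =>
        CreateHostBuilder(args).Build().Run();

    public static IHostBuilder CreateHostBuilder(string[] args) =>
        Host.CreateDefaultBuilder(args)
            .ConfigureWebHostDefaults(webBuilder =>
            {
                // Bind Kestrel to port 5150 on all interfaces inside the container
                webBuilder.UseUrls("http://+:5150");
                webBuilder.UseStartup<Startup>();
            });
}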
Those logs are internal to the container. It's not clear how you've told your server to run on 5150: EXPOSE doesn't change the code's behavior, and there is this ASPNETCORE_URLS environment variable which controls the server binding, which you've not overridden anywhere.
If you simply want to use localhost:5150, with no other changes, you would need to use 5150:80 as your compose ports definition.
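In the compose file from the question, that would be:

ports:
  - "5150:80"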

Access a docker network during build of docker-compose v2?

Summary
Having a docker-compose.yml file that builds an image like this:
services:
  my-service:
    build:
      context: My.VS.AppFolder
    networks:
      - my-docker
networks:
  my-docker:
    external: true
the defined network is only available at runtime (in the ENTRYPOINT), not during build. How can I access another container on the same my-docker network during the build of the Dockerfile?
Use case: Fetching NuGet packages from private repo
Description of the use case
I have an ASP.NET Core MVC 2.1 web application. For a better separation of concerns, I want to implement certain features in a separate web app (e.g. an admin interface). To avoid copying shared things like the app layout (Razor) or some helper utility classes, I created a shared project for those things.
So there are three projects now:
MyApp (The original application)
MyApp.Core (For shared things)
MyApp.Admin
Since MyApp.Core needs to be referenced from the other projects, I installed BaGet as a simple NuGet hosting repo for my Docker build environment. This container is referenced internally by its DNS name in nuget.config, created at the solution level of MyApp (same for the new MyApp.Admin, but let's focus on MyApp for simplicity).
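For context, such a nuget.config might look roughly like this (a sketch only: the service name baget and its default container port are assumptions about this particular setup; the /v3/index.json path is BaGet's documented endpoint):

<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <packageSources>
    <!-- "baget" resolves through Docker's internal DNS on the my-docker network -->
    <add key="baget" value="http://baget/v3/index.json" />
  </packageSources>
</configuration>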
The problems
In the Dockerfile of MyApp, I'm now doing this:
RUN dotnet restore --configfile nuget.config
and need to access the DNS name baget on the my-docker network. Research shows that this is only possible with at least version 3.4 of the compose file format, and it still seems not to be officially documented. But Docker removed several options from v2 in v3, for example resource limits like mem_limit, which I'm using. From v3 they're only available using Swarm, not on single nodes any more.
So I currently don't see any solution other than migrating to v3 and Swarm, which would cause extra work and complexity without any benefit beyond this networking issue. My project isn't big enough that Swarm is required.
I found two ways of working around this problem:
Build with the docker CLI tool instead of docker-compose
Instead of docker-compose up -d --build, I manually build the image using the docker CLI, because it has a --network switch that allows specifying the network during build:
docker build --network my-docker -t my-service:latest --build-arg ASPNETCORE_ENVIRONMENT=Development My.VS.AppFolder
Now reference this image in docker-compose.yml instead of building it there:
services:
  my-service:
    image: my-service:latest
After the image has been built, run docker-compose up -d without the --build flag. This causes a bit of overhead, since you have two CLI calls, and for real tagging like alpine-3.2.1 the tag needs to be specified in an environment variable and passed to both docker and docker-compose (see the sketch below). But it seems the best working alternative for production use.
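A sketch of that two-step flow under the names used above (the IMAGE_TAG variable name is illustrative):

# Build on the external network so e.g. "baget" resolves during dotnet restore
export IMAGE_TAG=1.0.0
docker build --network my-docker -t my-service:$IMAGE_TAG \
  --build-arg ASPNETCORE_ENVIRONMENT=Development My.VS.AppFolder

# docker-compose.yml references image: my-service:${IMAGE_TAG}
IMAGE_TAG=$IMAGE_TAG docker-compose up -d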
Compatibility mode
There is a --compatibility switch in docker-compose since 1.20.0 that allows using the new v3 file format but maps Swarm options like resource limits back to their local v2 form. In other words: you can specify resources in v3 and they apply under docker-compose when this switch is used. Otherwise they would be ignored by docker-compose and only take effect with docker stack deploy.
So with this switch, you can profit from the ability to define a network during build without losing resource limits. But the documentation warns that this is not stable enough for production use:
We recommend against using --compatibility mode in production. Because the resulting configuration is only an approximate using non-Swarm mode properties, it may produce unexpected results.
For this reason, I don't consider it a real solution and use the first approach of building the image with the docker CLI, where specifying a network is possible. For completeness, a sketch of the compatibility-mode route follows.
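A hedged sketch of that route (the network key under build requires compose file format 3.4; the memory value is illustrative):

version: "3.4"
services:
  my-service:
    build:
      context: My.VS.AppFolder
      network: my-docker      # build-time network access
    deploy:
      resources:
        limits:
          memory: 512M        # mapped back to mem_limit by --compatibility

# run with:
# docker-compose --compatibility up -d --build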

How can I see changes in c# application running on Docker container?

I have an ASP.NET MVC application in .NET Core. I run it in Docker using docker-compose.yml and the command:
docker-compose up -d
Now I can display my website on http://localhost:5000.
But if I change something in a class or a .cshtml file, I don't see those changes on http://localhost:5000. What should I do? Stop the container, and then what? Or something else?
You should rebuild your containers:
docker-compose up -d --build
You can rebuild specific containers in docker-compose like this:
docker-compose up -d --build service_name_1 service_name_2
From this source: https://docs.docker.com/compose/reference/build/
docker-compose build
Usage: build [options] [--build-arg key=val...] [SERVICE...]
Options:
    --compress              Compress the build context using gzip.
    --force-rm              Always remove intermediate containers.
    --no-cache              Do not use cache when building the image.
    --pull                  Always attempt to pull a newer version of the image.
    -m, --memory MEM        Sets memory limit for the build container.
    --build-arg key=val     Set build-time variables for services.
    --parallel              Build images in parallel.
Services are built once and then tagged, by default as project_service. For example, composetest_db. If the Compose file specifies an image name, the image is tagged with that name, substituting any variables beforehand. See variable substitution.
If you change a service’s Dockerfile or the contents of its build directory, run docker-compose build to rebuild it.

Passing KeyVault secrets to .net core 2 xUnit/MsTest in VSTS

I have several secrets stored in Azure KeyVault. Unfortunately, I cannot find a way to pass parameters to my .NET Core 2.0 test run via VSTS (Visual Studio Team Services).
The documentation says that KeyVault secrets can only be supplied via VSTS variables - fair enough - but how do I actually do this? All the information I can find on the web seems outdated or doesn't work.
For example, consider this RunSettings file:
<RunSettings>
  <TestRunParameters>
    <Parameter name="webAppUrl" value="http://localhost" />
    <Parameter name="webAppUserName" />
    <Parameter name="webAppPassword" />
  </TestRunParameters>
</RunSettings>
I tried passing values for the last two parameters via the command line as follows:
vsts.console MyTest.dll /Settings:vsts.runsettings -- -webAppUserName foo
vsts.console MyTest.dll /Settings:vsts.runsettings -- webAppUserName=foo
dotnet test -s vsts.runsettings -- -webAppUserName foo
dotnet test -s vsts.runsettings -- webAppUserName=foo
but this has no effect - the webAppUserName value remains null (I can see a value for webAppUrl, so I know my code is right!)
I've also tried both the VSTS "dotnet test" task and the "VsTest" task from my VSTS 2017 build. VsTest provides an "Override test run parameters" setting - and as per the tooltip, I tried:
-webAppUserName user -webAppPassword somethingSecret
Again - no effect!
Originally I used xUnit and had exactly the same issue - i.e. I couldn't figure out a way to pass parameters via VSTS - so I tried MSTest, but had the same issue.
Going back to my original issue of injecting KeyVault secrets:
Locally, via Visual Studio, I was able to just do the following from my test:
var config = new ConfigurationBuilder()
    .AddJsonFile("appsettings.json", true, true)
    ...
    .AddAzureKeyVault(...)
Unfortunately, when run via VSTS, the test runner just hangs (i.e. I had to stop it after several minutes) with no log output for the test step if I use either
.AddAzureKeyVault(...) or .AddEnvironmentVariables()
So I tried using VSTS Variable Groups and linking one to KeyVault. However, the KeyVault secrets are not accessible via environment variables (using Environment.GetEnvironmentVariable(...) directly from C#) - so that's no good. They say you can only pass these to tasks via VSTS variables... hence my problem!
Aside:
Even if I could use environment variables, it's not optimal, because when using .AddAzureKeyVault() I can supply a custom IKeyVaultSecretManager to, for example, replace a special delimiter with the ':' character - this means that I can nest my JSON config values. E.g. if I had this in my config file:
{ "A" : { "B" : "somevalue" } }
then using the normal configuration builder I can access the above via config["A:B"]. Unfortunately, KeyVault doesn't allow the ":" character in secret names, so you have to replace it with something like "--" and then use a custom IKeyVaultSecretManager to replace "--" with ":" (which works great and ensures that variables are properly overridden based on the order of providers registered in the config builder).
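For illustration, a minimal sketch of such a secret manager, assuming the Microsoft.Extensions.Configuration.AzureKeyVault package of that era (the class name here is made up; the interface members are as that package defines them):

using Microsoft.Azure.KeyVault.Models;
using Microsoft.Extensions.Configuration.AzureKeyVault;

public class DelimiterSecretManager : IKeyVaultSecretManager
{
    // Load every secret from the vault into configuration
    public bool Load(SecretItem secret) => true;

    // Map a KeyVault secret named "A--B" to the configuration key "A:B"
    public string GetKey(SecretBundle secret) =>
        secret.SecretIdentifier.Name.Replace("--", ":");
}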
Please help! All I wanted for Christmas was not to put my KeyVault secrets into Git... but the VSTS KeyVault grinch is spoiling my fun... surely I'm missing something??
Just an update:
I've recently set up a new xUnit .NET Core 2.0 project. For this project I had no issues passing environment variables through Azure DevOps. Basically, all you do is:
Create a build pipeline variable - e.g. ASPNETCORE_ENVIRONMENT
Create a Visual Studio Test task
In your xUnit project, simply use System.Environment.GetEnvironmentVariable("ASPNETCORE_ENVIRONMENT")
Given that the above works, the .AddEnvironmentVariables() configuration method should now just work fine as well.
This mechanism can be used to load different config files per test environment too.
For me, this is all I needed, as it gives me a nice, easy way to pass environment-specific options to integration tests. A sketch of the pattern follows.
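A minimal sketch of that pattern, assuming the standard Microsoft.Extensions.Configuration packages (the file names are illustrative):

using System;
using Microsoft.Extensions.Configuration;

public static class TestConfig
{
    public static IConfiguration Build()
    {
        // Fall back to Development when the pipeline variable is not set
        var env = Environment.GetEnvironmentVariable("ASPNETCORE_ENVIRONMENT") ?? "Development";

        return new ConfigurationBuilder()
            .AddJsonFile("appsettings.json", optional: true)
            .AddJsonFile($"appsettings.{env}.json", optional: true) // per-environment overrides
            .AddEnvironmentVariables()
            .Build();
    }
}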
The format for overriding test run parameters is AppURL=$(DeployURL);Port=8080, not -name value.
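Applied to the parameters from the question, that would look like:

webAppUserName=foo;webAppPassword=somethingSecret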
Supplying Run Time Parameters to Tests

How do I run Typescript tests for Jasmine\Karma through a TFS Build Process

I have a web application using Angular 1.5, coded in TypeScript, with bower/npm/gulp for the build. Our back end is a C# .NET WebApi2. Both are built and deployed on TFS 2015. My C# NUnit tests are easy to integrate as part of the build process. The TypeScript Jasmine unit tests, however, are more difficult to integrate. How do I get my TypeScript Jasmine unit tests to run as part of the TFS build and, if they fail, fail the build? We have them running through a Jasmine spec runner and also Karma, but not integrated.
I have read the many posts on Stack Overflow about integrating JavaScript unit tests, and each avenue took me through an overly complex solution that didn't work. These include PowerShell scripts and Chutzpah, amongst others.
Rather than trying to recreate the spec runner via Chutzpah on the build server, which I found difficult to configure and get working, the aim was to get Karma to output the test results in the 'trx' format that TFS recognises and then publish them to the build. Please note I am using PhantomJS to run my tests through Karma, but I won't cover that here as it is well covered elsewhere.
1) Install the karma-trx-reporter plugin (or a similar plugin) via npm into your web project.
2) Configure karma.config to include the trx reporter:
reporters: ['dots', 'trx'],
trxReporter: { outputFile: 'test-results.trx' },

// notify karma of the available plugins
plugins: [
    'karma-jasmine',
    'karma-phantomjs-launcher',
    'karma-trx-reporter',
],
3) Create a Gulp (or Grunt) task to run the Karma tests if you don't already have one. Run the task locally and check that it creates the 'test-results.trx' file specified above. (It doesn't matter where on the build server the file is created.)
gulp.task('test', function () {
    return gulp.src(['tests/*.js']).pipe(karma({
        configFile: __dirname + '/Testing/karma.config.js',
        singleRun: true
    }));
});
4) Add a Gulp (or Grunt) TFS build task to run the Karma tests created in the previous step and output the trx file.
5) Add a TFS build task to publish the test results and merge them into the build. Note that the "Test Result Files" path is a wildcard, **/*.trx, to find any trx files in the build path (i.e. it finds our previously created file). "Merge Test Results" is checked to merge both our Jasmine test run and our C# test run into the same session. "Continue on error" is unticked to ensure any Jasmine test failures break the build.
You will notice two sets of tests that have been run and included as part of the build!
