I'm just getting started with Docker and have installed Docker for Windows.
The basic setup of Docker is correct, and I have been able to debug a simple ASP.NET Core app deployed to a container from within Visual Studio (using the standard 'Run' command targeting Docker).
The problem I'm having is hitting the endpoint hosted within the container without using localhost, i.e. using the IP of the container. I need this because I intend to hit the endpoint from a Xamarin app.
After doing some reading, it seems I need to 'publish' the port the application is running on, in this case port 5000, but I can't seem to find where to configure Visual Studio to do this.
Using Postman or a web browser to hit the endpoint results in the same Empty_Response error.
I'm hoping someone can point me in the right direction.
My Dockerfile:
FROM mcr.microsoft.com/dotnet/core/aspnet:2.2-stretch-slim AS base
WORKDIR /app
EXPOSE 5000
ENV ASPNETCORE_URLS http://<container ip>:5000
FROM mcr.microsoft.com/dotnet/core/sdk:2.2-stretch AS build
WORKDIR /src
COPY ["ItemCheckout/ItemCheckout.csproj", "ItemCheckout/"]
RUN dotnet restore "ItemCheckout/ItemCheckout.csproj"
COPY . .
WORKDIR "/src/ItemCheckout"
RUN dotnet build "ItemCheckout.csproj" -c Release -o /app
FROM build AS publish
RUN dotnet publish "ItemCheckout.csproj" -c Release -o /app
FROM base AS final
WORKDIR /app
COPY --from=publish /app .
ENTRYPOINT ["dotnet", "ItemCheckout.dll"]
Startup.cs:
public class Startup
{
private static readonly LoggerFactory _loggerFactory = new LoggerFactory(new []{new DebugLoggerProvider()});
public Startup(IConfiguration configuration)
{
Configuration = configuration;
}
public IConfiguration Configuration { get; }
// This method gets called by the runtime. Use this method to add services to the container.
public void ConfigureServices(IServiceCollection services)
{
services.AddMvc();
services.AddDbContext<ItemCheckoutDbContext>(o =>
{
o.UseSqlServer(Configuration.GetConnectionString("DefaultConnection"));
o.UseLoggerFactory(_loggerFactory);
});
}
// This method gets called by the runtime. Use this method to configure the HTTP request pipeline.
public void Configure(IApplicationBuilder app, IHostingEnvironment env)
{
if (env.IsDevelopment())
{
app.UseDeveloperExceptionPage();
}
app.UseMvc();
}
}
Program.cs:
public class Program
{
public static async Task Main(string[] args)
{
await CreateWebHostBuilder(args).Build().RunAsync();
}
public static IWebHostBuilder CreateWebHostBuilder(string[] args) =>
WebHost.CreateDefaultBuilder(args)
.UseKestrel()
.UseUrls("http://<container ip>:5000")
.UseStartup<Startup>();
}
Output when running:
Hosting environment: Development
Content root path: /app
Now listening on: http://<container ip>:5000
Application started. Press Ctrl+C to shut down.
EDIT: Updated Program.cs as per MindSwipe's suggestion; however, I am still getting the same result.
Port mapping must be specified in the docker run command. The EXPOSE instruction in a Dockerfile is normally used for documentation purposes only.
Solution: in your Visual Studio project file add the following:
<PropertyGroup>
<DockerfileRunArguments>-p 5000:5000</DockerfileRunArguments>
</PropertyGroup>
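If you run the container outside Visual Studio, the equivalent is to publish the port on the docker run command line, along these lines (the image name itemcheckout is just a placeholder):
docker run -p 5000:5000 itemcheckout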
References:
https://learn.microsoft.com/en-us/visualstudio/containers/container-msbuild-properties?view=vs-2019
https://docs.docker.com/engine/reference/builder/#expose
OK - from what I see, you're confusing the port on the container with the port on the host.
You have 2 sides to port mapping in Docker - the host side and the client/container side.
There are a couple of ways to solve the issue that you can't connect.
With the container running (F5 debug, command line), execute the following command:
"docker ps -a"
You will see a list of running containers; there's a column called "Ports". Find the entry for your container - probably solutionname:dev. Look at the Ports column and you will see AA -> BB.
AA is the port you need in your browser - that is your host port - so http://localhost:AA or http://IP-Address:AA/
BB is the listening port on the container - in your case 5000.
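For example, a Ports entry like the one below means the host port is 32768 and the container port is 5000, so you would browse to http://localhost:32768 (32768 is just an illustrative value chosen by Docker):
0.0.0.0:32768->5000/tcp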
Does this help?
EDIT:
change this line: ENV ASPNETCORE_URLS http://<container ip>:5000
to ENV ASPNETCORE_URLS http://+:5000
This is relative to the container - not the host.
I developed a Blazor Server app which uses EF Core to access a MariaDB database.
I want to publish the app (for production) to a Linux vServer running Debian 10 (could also use Ubuntu), managed with Plesk.
The app should run in a Docker container. I'm completely new to Docker.
The MariaDB is running via Plesk, not containerized; I want to access it over localhost,3306.
I do have root and shell access.
This is my Dockerfile:
FROM mcr.microsoft.com/dotnet/aspnet:6.0
COPY . /app
WORKDIR /app
EXPOSE 8700/tcp
ENV ASPNETCORE_URLS http://*:8700
ENV ASPNETCORE_ENVIRONMENT docker
FROM mcr.microsoft.com/dotnet/aspnet:6.0 AS base
WORKDIR /app
#EXPOSE 80
#EXPOSE 443
FROM mcr.microsoft.com/dotnet/sdk:6.0 AS build
WORKDIR /src
COPY ["WebApplication/WebApplication.csproj", "WebApplication/"]
COPY ["InvoicingDocuments/InvoicingDocuments.csproj", "InvoicingDocuments/"]
COPY ["DataModels/DataModels.csproj", "DataModels/"]
COPY ["SmtpMail/SmtpMail.csproj", "SmtpMail/"]
RUN dotnet restore "WebApplication/WebApplication.csproj"
COPY . .
WORKDIR "/src/WebApplication"
RUN dotnet build "WebApplication.csproj" -c Release -o /app/build
FROM build AS publish
RUN dotnet publish "WebApplication.csproj" -c Release -o /app/publish
FROM base AS final
WORKDIR /app
COPY --from=publish /app/publish .
ENTRYPOINT ["dotnet", "WebApplication.dll"]
I was running the command:
docker run -d -p 80:8700 --net host cpp-blazor
I added --net host so I can access the MariaDB on localhost.
First I encountered the error:
System.InvalidOperationException: Unable to configure HTTPS endpoint. No server certificate was specified, and the default developer certificate could not be found or is out of date.
even though the homepage has an SSL certificate set up in Plesk. The application then exited.
My first approach to this issue was to configure Kestrel with a .pfx file, following the video Custom HTTPS Dev Environment using .NET Core, Kestrel & certificates.
My Program.cs now contains the following:
using System;
using System.Collections.Generic;
using System.Linq;
using System.Net;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Hosting;
using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.Hosting;
using Microsoft.Extensions.Logging;
namespace WebApplication
{
public class Program
{
public static void Main(string[] args)
{
CreateHostBuilder(args).Build().Run();
}
public static IHostBuilder CreateHostBuilder(string[] args) =>
Host.CreateDefaultBuilder(args)
.ConfigureServices((context, services) =>
{
HostConfig.CertPath = "certificate.pfx";
HostConfig.CertPassword = "password";
})
.ConfigureWebHostDefaults(webBuilder =>
{
webBuilder.ConfigureKestrel(opt =>
{
//opt.ListenAnyIP(8701);
opt.ListenAnyIP(8700, listOpt =>
{
listOpt.UseHttps(HostConfig.CertPath, HostConfig.CertPassword);
});
});
webBuilder.UseStartup<Startup>();
});
}
public static class HostConfig
{
public static string CertPath { get; set; }
public static string CertPassword { get; set; }
}
}
certificate.pfx is copied on build and is actually found by the application.
The .pfx file is an SSL wildcard certificate from DigiCert; I got it from the hosting provider.
It works fine in Plesk.
I get the following output from the docker log:
{"EventId":0,"LogLevel":"Warning","Category":"Microsoft.AspNetCore.Server.Kestrel","Message":"Overriding address(es) \u0027http://\u002B:80\u0027. Binding to endpoints defined via IConfiguration and/or UseKestrel() instead.","State":{"Message":"Overriding address(es) \u0027http://\u002B:80\u0027. Binding to endpoints defined via IConfiguration and/or UseKestrel() instead.","addresses":"http://\u002B:80","{OriginalFormat}":"Overriding address(es) \u0027{addresses}\u0027. Binding to endpoints defined via IConfiguration and/or UseKestrel() instead."}}
{"EventId":14,"LogLevel":"Information","Category":"Microsoft.Hosting.Lifetime","Message":"Now listening on: https://[::]:8700","State":{"Message":"Now listening on: https://[::]:8700","address":"https://[::]:8700","{OriginalFormat}":"Now listening on: {address}"}}
{"EventId":0,"LogLevel":"Information","Category":"Microsoft.Hosting.Lifetime","Message":"Application started. Press Ctrl\u002BC to shut down.","State":{"Message":"Application started. Press Ctrl\u002BC to shut down.","{OriginalFormat}":"Application started. Press Ctrl\u002BC to shut down."}}
{"EventId":0,"LogLevel":"Information","Category":"Microsoft.Hosting.Lifetime","Message":"Hosting environment: Production","State":{"Message":"Hosting environment: Production","envName":"Production","{OriginalFormat}":"Hosting environment: {envName}"}}
{"EventId":0,"LogLevel":"Information","Category":"Microsoft.Hosting.Lifetime","Message":"Content root path: /app","State":{"Message":"Content root path: /app","contentRoot":"/app","{OriginalFormat}":"Content root path: {contentRoot}"}}
Using the same docker run command, I found the app running under https://domain.com:80, but I get the error SSL_ERROR_RX_RECORD_TOO_LONG when navigating to that domain in the browser. The Docker container does not crash.
It works locally on my machine; it just tells me the certificate is not trusted because it is assigned to my domain and not to localhost. I can access the app.
I already tried removing the certificates from Plesk, resulting in the same error or in not reaching the webpage.
Any suggestions or tips on how to properly set up the certificate for Blazor Server or ASP.NET Core hosting under Linux in this case?
I don't think Plesk is the issue, since I run into the same problems when using the shell only.
I have a solution which contains 2 microservices (Product & Review). Each of the 2 API projects has a Dockerfile defined; here is an example of the Review Dockerfile:
FROM mcr.microsoft.com/dotnet/core/aspnet:3.1-buster-slim AS base
WORKDIR /app
EXPOSE 80
FROM mcr.microsoft.com/dotnet/core/sdk:3.1-buster AS build
WORKDIR /src
ENV ConnectionString:Review="Data Source=db,1433;Initial Catalog=Review;uid=sa;pwd=TesT111t!;Integrated Security=false;MultipleActiveResultSets=True"
COPY ["Review.Api/Review.Api.csproj", "Review.Api/"]
COPY ["Review.Data/Review.Data.csproj", "Review.Data/"]
COPY ["Review.Service/Review.Service.csproj", "Review.Service/"]
RUN dotnet restore "Review.Api/Review.Api.csproj"
COPY . .
WORKDIR "/src/Review.Api"
RUN dotnet build "Review.Api.csproj" -c Release -o /app/build
FROM build AS publish
RUN dotnet publish "Review.Api.csproj" -c Release -o /app/publish
FROM base AS final
WORKDIR /app
COPY --from=publish /app/publish .
ENTRYPOINT ["dotnet", "Review.Api.dll"]
As you can see, the Dockerfile contains an env variable, ConnectionString:Review.
I have a breakpoint in the Startup of the Review API project to inspect what is inside the Configuration; here is what the file looks like:
public class Startup
{
public IConfiguration Configuration { get; }
public Startup(IConfiguration configuration)
{
Configuration = configuration;
}
public void ConfigureServices(IServiceCollection services)
{
services.AddDbContext<ReviewDbContext>(options => options.UseSqlServer(Configuration.GetConnectionString("Review")), ServiceLifetime.Transient);
services.AddAutoMapper(typeof(FindProductReviews));
services.AddAutoMapper(typeof(FindReview));
services.AddAutoMapper(typeof(ReviewController));
services.AddAutoMapper(typeof(ProductController));
services.AddMediatR(typeof(FindProductReviews.Handler).Assembly);
services.AddMediatR(typeof(FindReview.Handler).Assembly);
services.AddMediatR(typeof(CreateReview.Handler).Assembly);
services.AddControllers();
}
public void Configure(IApplicationBuilder app, IWebHostEnvironment env)
{
app.UseRouting();
app.UseEndpoints(endpoints =>
{
endpoints.MapControllers();
endpoints.MapDefaultControllerRoute();
});
}
}
My solution has a docker-compose file, and when running docker-compose up -d the breakpoint is hit; however, when looking inside the Configuration there is no connection string like the one defined in the Dockerfile.
I feel like I'm missing something small; I've looked at the documentation and cannot find what I'm missing.
Application initial configuration
First of all, please confirm that, when your program starts, you are building your configuration as in one of the typical setups below, so that it binds environment variables.
// Defining a configuration that explicitly uses env. variables
public static IConfiguration Configuration { get; } = new ConfigurationBuilder()
...
.AddEnvironmentVariables()
.Build();
or
// The default builder already adds env. variables
public static IHostBuilder CreateHostBuilder(string[] args) =>
Host.CreateDefaultBuilder(args)
...
Docker
Please replace ConnectionString:Review with ConnectionString__Review (reference here).
If you want to keep setting the env variable in your Dockerfile, you can move it to the last stage, final, such as:
FROM base AS final
WORKDIR /app
COPY --from=publish /app/publish .
ENV ConnectionString__Review="Data Source=db,1433;Initial Catalog=Review;uid=sa;pwd=TesT111t!;Integrated Security=false;MultipleActiveResultSets=True"
ENTRYPOINT ["dotnet", "Review.Api.dll"]
However, to enhance decoupling, you may prefer to set the variable in your docker-compose file.
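For example, a minimal sketch of the relevant service in docker-compose (the service name review.api is an assumption; the value is the same one from your Dockerfile):
services:
  review.api:
    environment:
      - ConnectionString__Review=Data Source=db,1433;Initial Catalog=Review;uid=sa;pwd=TesT111t!;Integrated Security=false;MultipleActiveResultSets=True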
You may also consider using the appsettings JSON files, since they allow much flexibility across environments.
Multiple connection configurations
If your services are able to access shared configurations but also have their own, then you can give each configuration a proper name, to make sure no mistakes happen :)
You only define the ENV variable in the build stage. If you follow the FROM chain, the final image is built FROM base and only brings content in via COPY --from=publish, but COPY copies files, not environment variables or other metadata.
Your question title and tags hint at a Compose-based setup. Connection information like the database host name and (especially) credentials needs to be configured at deployment time, not baked into the image. I'd remove this line from your Dockerfile entirely, and instead include it in your docker-compose.yml:
version: '3.8'
services:
db: { ... }
app:
environment:
- ConnectionString:Review=Data Source=db,1433;Initial Catalog=Review;uid=sa;pwd=TesT111t!;Integrated Security=false;MultipleActiveResultSets=True
...
Below is the code from the base template in Visual Studio 2019 for creating a Worker Service. When deploying this service to a Linux container, the CancellationToken in the ExecuteAsync method is never cancelled when the container is requested to stop. After stopping the container in various ways, I see 10 more "Worker running at" log entries and then the container stops. This is consistent with what I've read: Docker sends a SIGTERM signal, waits 10 seconds, then sends SIGKILL if the container doesn't stop on its own.
What I am searching for is how that CancellationToken is supposed to get cancelled when the container is requested to stop. My end goal is to make sure any in-flight transactions are completed or rolled back, and other such cleanup work is done, before the container is suddenly stopped.
public class Program
{
public static void Main(string[] args)
{
CreateHostBuilder(args).Build().Run();
}
public static IHostBuilder CreateHostBuilder(string[] args) =>
Host.CreateDefaultBuilder(args)
.ConfigureServices((hostContext, services) =>
{
services.AddHostedService<Worker>();
});
}
public class Worker : BackgroundService
{
private readonly ILogger<Worker> _logger;
public Worker(ILogger<Worker> logger)
{
_logger = logger;
}
protected override async Task ExecuteAsync(CancellationToken stoppingToken)
{
while (!stoppingToken.IsCancellationRequested)
{
_logger.LogInformation("Worker running at: {time}", DateTimeOffset.Now);
await Task.Delay(1000, stoppingToken);
}
}
}
I have tried subscribing to the AppDomain.CurrentDomain.ProcessExit event as that seems to be what most people around the web say to do (without any examples), but either the way I implemented it doesn't work or that just isn't the correct thing to do. Running the container from VS2019, the code of the event handler is never executed. Below is how I tried to implement that:
public static void Main(string[] args)
{
using (CancellationTokenSource cts = new CancellationTokenSource())
{
AppDomain.CurrentDomain.ProcessExit += (sender, eventArgs) =>
{
cts.Cancel();
};
var host = CreateHostBuilder(args).Build();
Task task;
task = host.RunAsync(cts.Token);
task.Wait(cts.Token);
}
}
Update:
I'm guessing the problem has something to do with the way the template builds the Dockerfile and starts the container. The SIGTERM is sent to PID 1, and looking at the output of ps -ef in the container, a tail invocation is PID 1 rather than anything from within the .NET Core solution. I guess the next question would be how to fix this?
Update (Requested Dockerfile)
The Dockerfile is the default one Visual Studio creates for this template.
#See https://aka.ms/containerfastmode to understand how Visual Studio uses this Dockerfile to build your images for faster debugging.
FROM mcr.microsoft.com/dotnet/runtime:5.0 AS base
WORKDIR /app
FROM mcr.microsoft.com/dotnet/sdk:5.0 AS build
WORKDIR /src
COPY ["WorkerService2/WorkerService2.csproj", "WorkerService2/"]
RUN dotnet restore "WorkerService2/WorkerService2.csproj"
COPY . .
WORKDIR "/src/WorkerService2"
RUN dotnet build "WorkerService2.csproj" -c Release -o /app/build
FROM build AS publish
RUN dotnet publish "WorkerService2.csproj" -c Release -o /app/publish
FROM base AS final
WORKDIR /app
COPY --from=publish /app/publish .
ENTRYPOINT ["dotnet", "WorkerService2.dll"]
This is actually an issue with running the containers from Visual Studio in Docker for Windows (I'm guessing Docker for Mac will be similar) while debugging.
After reading through the link included in the generated Dockerfile (https://aka.ms/containerfastmode) last night: when the image is run from Visual Studio for testing, the entry point is not the .NET Core application, so PID 1 is not the .NET Core app and it does not receive the SIGTERM signal. However, the .NET Core app should be the entry point when not started via Visual Studio.
The entry point is tail -f /dev/null, which is an infinite wait to
keep the container running. When the app is launched through the
debugger, it is the debugger that is responsible to run the app (that
is, dotnet webapp.dll). If launched without debugging, the tooling
runs a docker exec -i {containerId} dotnet webapp.dll to run the app.
After doing a build in Visual Studio, I tested the image with the :latest tag for my application from Docker for Windows directly. When I did this, the ENTRYPOINT in the Dockerfile was used and PID 1 was the .NET Core application as expected. Upon stopping the container, the CancellationToken was cancelled and an OperationCanceledException was thrown.
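For reference, reproducing that test from the command line looks roughly like this (image and container names are placeholders):
docker run -d --name worker-test workerservice2:latest
docker logs -f worker-test
# in another terminal; sends SIGTERM to PID 1, then SIGKILL after the grace period
docker stop worker-test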
Below is basic code for the template that I'd use going forward, in order to account for the CancellationToken getting cancelled and the exception being thrown when it does.
protected override async Task ExecuteAsync(CancellationToken stoppingToken)
{
try
{
_logger.LogInformation($"WORKER STARTING at: {DateTimeOffset.Now}");
while (!stoppingToken.IsCancellationRequested)
{
_logger.LogInformation("$Worker running at: {DateTimeOffset.Now}");
await Task.Delay(1000, stoppingToken);
}
_logger.LogInformation($"Exited While Loop: {DateTimeOffset.Now} (Will never get hit as is)");
}
catch(OperationCanceledException)
{
_logger.LogInformation("!!!OperationCancelled!!! Start clean up from here");
}
catch(Exception ex)
{
_logger.LogCritical(ex, $"Exception Caught: {ex.GetType().FullName}");
}
_logger.LogInformation($"WORKER STOPPING at: {DateTimeOffset.Now}");
}
I have a .NET project, and when I run it inside a Docker container the build succeeds and the program starts (it logs that everything is fine), but when I make a request in Postman I get a socket hang up error. What could be the reason? When I start the server manually with the dotnet run command, it works fine.
Here is my Dockerfile:
FROM mcr.microsoft.com/dotnet/aspnet:5.0-buster-slim AS base
WORKDIR /app
FROM mcr.microsoft.com/dotnet/sdk:5.0-buster-slim AS build
WORKDIR /src
COPY ["kisc.csproj", ""]
RUN dotnet restore "./kisc.csproj"
COPY . .
WORKDIR "/src/."
RUN dotnet build "kisc.csproj" -c Release -o /app/build
FROM build AS publish
RUN dotnet publish "kisc.csproj" -c Release -o /app/publish
FROM base AS final
WORKDIR /app
COPY --from=publish /app/publish .
EXPOSE 5000
ENTRYPOINT ["dotnet", "kisc.dll"]
And here are the container's logs:
warn: Microsoft.AspNetCore.Server.Kestrel[0]
Unable to bind to http://localhost:5000 on the IPv6 loopback interface: 'Cannot assign requested address'.
info: Microsoft.Hosting.Lifetime[0]
Now listening on: http://localhost:5000
info: Microsoft.Hosting.Lifetime[0]
Application started. Press Ctrl+C to shut down.
info: Microsoft.Hosting.Lifetime[0]
Hosting environment: Production
info: Microsoft.Hosting.Lifetime[0]
Content root path: /app
Did you set the default url in your app?
public class Program
{
public static void Main(string[] args)
{
CreateHostBuilder(args).Build().Run();
}
public static IHostBuilder CreateHostBuilder(string[] args) =>
Host.CreateDefaultBuilder(args)
.ConfigureWebHostDefaults(webBuilder =>
{
webBuilder.UseStartup<Startup>();
webBuilder.UseUrls("http://localhost:5000", "https://localhost:8001");
});
}
Or set the environment variable with the port in your Dockerfile:
ENV ASPNETCORE_URLS="http://localhost:5000"
Look Here
I am trying to "dockerize" this clean architecture template for .net Core 3. I use the docker pull request here as the base for my proff of concept app. This is a .net core 3 webapi project with an Angular front end as the client app.
WHAT I HAVE:
The base code from the pull request works locally.
An initial problem I had to overcome was setting the cert for IdentityServer4 in a local non-development environment; I had to mount a volume with the cert and reference it from the appsettings.json file like this:
"IdentityServer": {
"Key": {
"Type": "File",
"FilePath": "/security/mycert.pfx",
"Password": "MyPassword"
}
}
I set up a CI/CD pipeline in Azure to build the project and deploy the image to an Azure container registry.
I set up a CI/CD release to deploy the Docker image to a Web App for Containers (Linux) web app. Both of these steps work properly.
MY PROBLEM:
The web app loads and runs the container, and the Angular front end is shown. However, it appears that the web API is not running. Any attempt to hit an endpoint of the web API returns the following error in the browser console:
GET https://.....azurewebsites.net/_configuration/CleanArchitecture.WebUI 404 (Not Found)
Error: Uncaught (in promise): Error: Could not load settings for 'CleanArchitecture.WebUI' Error: Could not load settings for 'CleanArchitecture.WebUI'
CleanArchitecture.WebUI is the name of the assembly that is the entry point in the Dockerfile:
ENTRYPOINT ["dotnet", "CleanArchitecture.WebUI.dll"]
All other aspects of the front end work properly; only calls to the "backend" API fail.
Another issue is that if I get the Docker logs from the Azure container, there are no errors shown.
WHAT I TRIED
I tried to add "dotnet CleanArchitecture.WebUI.dll" to the startup command of the container in the container settings of the web app, but that just throws an error that it can't find CleanArchitecture.WebUI.dll
I have tried to increase the logging level ("LogLevel": { "Default": "Debug" }) to get more details, but no additional error details are shown in the Docker logs.
It might be an error loading the Identity Server 4 certificate, but there are no errors to confirm this problem.
Here is my docker-compose file that is used by the Azure pipeline:
version: '3.4'
services:
webui:
image: ${DOCKER_REGISTRY-}webui
build:
context: .
dockerfile: src/WebUI/Dockerfile
environment:
- "UseInMemoryDatabase=false"
- "ASPNETCORE_ENVIRONMENT=Production"
- "ConnectionStrings__DefaultConnection=myconnection"
- "ASPNETCORE_Kestrel__Certificates__Default__Password=mypass"
- "ASPNETCORE_Kestrel__Certificates__Default__Path=/security/mycert.pfx"
ports:
- "5000:5000"
- "5001:5001"
volumes:
- mcpdata:"/security:/"
restart: always
mcpdata is the name of the Azure file share that gets mounted and contains the actual cert.
Here is my azure-pipeline.yml for the CI/CD:
trigger:
- staging
resources:
- repo: self
variables:
# Container registry service connection established during pipeline creation
dockerRegistryServiceConnection: '****'
imageRepository: 'cleanarchitecture'
containerRegistry: '****.azurecr.io'
dockerComposeFilePath: '$(Build.SourcesDirectory)docker-compose.Production.yml'
tag: '$(Build.BuildId)'
# Agent VM image name
vmImageName: 'ubuntu-latest'
stages:
- stage: Build
displayName: Build and push stage
jobs:
- job: Build
displayName: Build
pool:
vmImage: $(vmImageName)
steps:
- task: Docker@2
displayName: Build and push an image to container registry
inputs:
command: buildAndPush
repository: $(imageRepository)
dockerComposeFile: $(dockerComposeFilePath)
containerRegistry: $(dockerRegistryServiceConnection)
tags: staging
QUESTION?
Can someone help me figure out why it appears that my web API is not running even though no errors are thrown? At a minimum, I would be happy if someone could help me see the errors in the Docker logs.
Thanks in advance.
I tried to reproduce this, with "clean architecture", using the following (note: I'm using zsh on macOS, but the same should work on Windows/Linux too):
take clean_architecture
dotnet new --install Clean.Architecture.Solution.Template
dotnet new ca-sln
The documentation suggests that pressing F5 in Visual Studio will start the template, although I had to do:
cd src/WebUI/ClientApp
npm install
At this point the app starts locally by hitting F5. Note, what happens here is that ASP.NET Core forwards requests to the Angular dev server: effectively, this runs ng serve --port 53543 AND starts ASP.NET Core (Kestrel in my case) on port 5001. Browsing to http://127.0.0.1:53543 serves the Angular page directly, and browsing to https://localhost:5001 brings up the same Angular page, as forwarded by ASP.NET Core to Angular. All very confusing... Detailed more here.
Note that the following lines of code exist in Startup.cs; these switch behaviour based on the environment variable ASPNETCORE_ENVIRONMENT:
if (!env.IsDevelopment())
{
app.UseSpaStaticFiles();
}
-- and within "app.UseSpa"
if (env.IsDevelopment())
{
spa.UseAngularCliServer(npmScript: "start");
}
Anyway, it looks like you've got that environment variable set to Production, which should just serve the built files from the ClientApp\dist folder (rather than forwarding to the dev server). That suggests that if you can see the Angular page, the .NET Core service is running... I'll try to rebuild the Dockerfiles first...
FROM mcr.microsoft.com/dotnet/core/aspnet:3.1-buster-slim AS base
ENV ASPNETCORE_URLS=https://+:5001;http://+:5000
WORKDIR /app
EXPOSE 5000
EXPOSE 5001
FROM mcr.microsoft.com/dotnet/core/sdk:3.1-buster AS build
RUN curl -sL https://deb.nodesource.com/setup_12.x | bash -
RUN apt install -y nodejs
WORKDIR /src
COPY ./src/WebUI/WebUI.csproj src/WebUI/
COPY ./src/Application/Application.csproj src/Application/
COPY ./src/Domain/Domain.csproj src/Domain/
COPY ./src/Infrastructure/Infrastructure.csproj src/Infrastructure/
RUN dotnet restore "src/WebUI/WebUI.csproj"
COPY . .
WORKDIR "/src/src/WebUI"
RUN dotnet build "WebUI.csproj" -c Release -o /app/build
FROM build AS publish
RUN dotnet publish "WebUI.csproj" -c Release -o /app/publish
FROM base AS final
WORKDIR /app
COPY --from=publish /app/publish .
ENTRYPOINT ["dotnet", "CleanArchitecture.WebUI.dll"]
Then build and run as follows:
# build takes a while
docker build -f ./src/WebUI/Dockerfile -t clean-architecture .
# note, this fails first time, because I set up as clean_architecture so the entry point is incorrect
docker run --rm -it -p 5000:5000 -p 5001:5001 clean-architecture
# run the container and override the entrypoint
docker run --rm -it --entrypoint /bin/bash clean-architecture
# From within the container...
root@93afb0ad21c5:/app# dotnet clean_architecture.WebUI.dll
# note, in .Net 3.1, you can also do this directly, as follows:
root@93afb0ad21c5:/app# ./clean_architecture.WebUI
Now there is a problem with LocalDB: System.PlatformNotSupportedException: LocalDB is not supported on this platform.
Switch appsettings.Production.json to be "UseInMemoryDatabase": true
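A minimal sketch of that change, assuming the flag sits at the root of appsettings.Production.json as it does in the template (the other keys in the file stay as they are):
{
  "UseInMemoryDatabase": true
}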
The problem then appears to be certificates...
I created a certificate using:
dotnet dev-certs https -ep ./https/clean-arch.pfx -p anything
For IdentityServer, I changed appsettings.Production.json as follows:
"IdentityServer": {
"Key": {
"Type": "File",
"FilePath": "/app/https/https/clean-arch.pfx",
"Password": "anything"
}
}
Running on Linux probably means running Kestrel, which means we need to provide the HTTPS cert there too. I did that by setting the following in Program.cs:
public static IHostBuilder CreateHostBuilder(string[] args) =>
Host.CreateDefaultBuilder(args)
.ConfigureWebHostDefaults(webBuilder =>
{
webBuilder.ConfigureKestrel((context, options) =>
{
options.AllowSynchronousIO = true;
options.Listen(IPAddress.Loopback, 5000, listenOptions =>
{
listenOptions.UseConnectionLogging();
listenOptions.Protocols = Microsoft.AspNetCore.Server.Kestrel.Core.HttpProtocols.Http1AndHttp2;
});
options.Listen(IPAddress.Any, 5001, listenOptions =>
{
listenOptions.UseConnectionLogging();
listenOptions.Protocols = Microsoft.AspNetCore.Server.Kestrel.Core.HttpProtocols.Http1AndHttp2;
listenOptions.UseHttps(new System.Security.Cryptography.X509Certificates.X509Certificate2("/app/https/https/clean-arch.pfx", "anything"));
});
});
webBuilder.UseStartup<Startup>();
});
At each stage I built the app in docker using...
\clean_architecture $ docker build -f ./src/WebUI/Dockerfile -t clean-architecture .
/clean_architecture $ docker run --rm -it -v /Users/monkey/src/csharp/clean_architecture/:/app/https/ -p 5000:5000 -p 5001:5001 --entrypoint /bin/bash clean-architecture
... and once running in bash (in docker), I used the following to start the application:
root@c5b4010d03be:/app# ./clean_architecture.WebUI
Good luck, hope that helps. Note, if it works in Docker, on your machine, it should work in Azure. I'll look at getting it going in Azure another day. Happy to upload my code to GitHub if it would help?
Thanks to 0909EM for the huge effort in answering the question, but the solution was different.
I figured out what was going on. There are two issues.
The docker-compose.override.yml file looks like this:
version: '3.4'
services:
webui:
environment:
- "ASPNETCORE_ENVIRONMENT=Development"
- "SpaBaseUrl=http://clientapp:4200"
clientapp:
image: ${DOCKER_REGISTRY-}clientapp
build:
context: src/WebUI/ClientApp
dockerfile: Dockerfile
depends_on:
- webui
restart: on-failure
db:
ports:
- "1433:1433"
Notice the line dockerfile: Dockerfile in the src/WebUI/ClientApp context. This Dockerfile was overriding the proper Dockerfile in src/WebUI during the Azure pipeline build. For some reason, when I run the following command locally: docker-compose -f 'docker-compose.Production.yml' up --build, it does not pull in the docker-compose.override settings, but the override settings do get used in the Azure pipeline build.
Therefore, the Angular Dockerfile is the only one built, and that image does not contain the .NET Core web API project. This explains why I see the front end but cannot get to the API endpoints, and also why the Docker logs show no .NET Core errors.
I was able to fix this in two ways.
First: rename the Dockerfile in src/WebUI/ClientApp to Dockerfile.clientapp and change the line in the docker-compose.override file to dockerfile: Dockerfile.clientapp (see the corrected snippet below).
Second: just remove the docker-compose.override file from the online repository that the Azure pipeline pulls from.
As a result, the proper Dockerfile is used and the web API project is in the image.
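With the first fix applied, the clientapp service in docker-compose.override.yml ends up looking something like this:
clientapp:
  image: ${DOCKER_REGISTRY-}clientapp
  build:
    context: src/WebUI/ClientApp
    dockerfile: Dockerfile.clientapp
  depends_on:
    - webui
  restart: on-failure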
The second issue: now that the proper image is running, the .NET Core web API throws an error about loading the cert for IdentityServer. This confirms my suspicion. Because this issue is not related to my original question about getting the web API running in the container, I have opened another question about it.