First time using GCP and I'm trying to upload a .NET project.
I cannot manage to publish the app using the integrated tools from Visual Studio, "Google Cloud Tools" and I get this message "Failed to deploy project to App Engine Flex".
I also tried using Cloud Run, and the Google Cloud SDK Shell with gcloud app deploy app.yml, but I get this error:
ERROR: build step 0 "gcr.io/cloud-builders/docker" failed: step exited with non-zero status: 1
I know the error is coming from the Dockerfile, but I don't know how to write one. Here it is:
FROM mcr.microsoft.com/dotnet/sdk:6.0 AS build
WORKDIR /src
COPY ./sims/sims.csproj ./sims/sims.csproj
COPY *.sln ./
RUN dotnet restore ./sims/sims.csproj
COPY . ./
RUN dotnet publish ./sims/sims.csproj -c Release -o /app/build --no-restore

FROM mcr.microsoft.com/dotnet/aspnet:6.0
WORKDIR /app
COPY --from=build /app/build .
# App Engine Flex routes requests to the container over HTTP on port 8080
ENV ASPNETCORE_URLS=http://+:8080
EXPOSE 8080
ENTRYPOINT ["dotnet", "sims.dll"]
Here's my app.yaml file in case the problem comes from there:
runtime: aspnetcore
env: flex

# This sample incurs costs to run on the App Engine flexible environment.
# The settings below are to reduce costs during testing and are not appropriate
# for production use. For more information, see:
# https://cloud.google.com/appengine/docs/flexible/python/configuring-your-app-with-app-yaml

manual_scaling:
  instances: 1
resources:
  cpu: 1
  memory_gb: 0.5
  disk_size_gb: 10
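As an aside (an assumption on my part based on the App Engine docs, not something verified against this project): when deploying a hand-written Dockerfile to the flexible environment, app.yaml normally declares a custom runtime rather than a language runtime:
runtime: custom
env: flex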
Do you have any advice?
Also, concerning these two files, I don't even know whether they're located in the right directory (see the project tree).
Thanks for reading.
I am trying to run a .NET Core based Azure Function inside a Docker container on an M1 MacBook, without success so far.
Originally I used the Azure Functions Core Tools CLI to create the function with the following command: func init LocalFunctionsProject --worker-runtime dotnet --docker, which created the following Dockerfile:
FROM mcr.microsoft.com/dotnet/core/sdk:3.1 AS installer-env
COPY . /src/dotnet-function-app
RUN cd /src/dotnet-function-app && \
mkdir -p /home/site/wwwroot && \
dotnet publish *.csproj --output /home/site/wwwroot
# To enable ssh & remote debugging on app service change the base image to the one below
# FROM mcr.microsoft.com/azure-functions/dotnet:3.0-appservice
FROM mcr.microsoft.com/azure-functions/dotnet:3.0
ENV AzureWebJobsScriptRoot=/home/site/wwwroot \
AzureFunctionsJobHost__Logging__Console__IsEnabled=true
COPY --from=installer-env ["/home/site/wwwroot", "/home/site/wwwroot"]
Building the Docker image and running it as a container works just fine on an AMD64-based machine. In the meantime I got myself an M1 MacBook, on which running the Azure Function inside a Docker container no longer works. It gives me the following exception:
WARNING: The requested image's platform (linux/amd64) does not match the detected host platform (linux/arm64/v8) and no specific platform was requested
Unhandled exception. System.IO.IOException: Function not implemented
at System.IO.FileSystemWatcher.StartRaisingEvents()
at System.IO.FileSystemWatcher.StartRaisingEventsIfNotDisposed()
at System.IO.FileSystemWatcher.set_EnableRaisingEvents(Boolean value)
at Microsoft.Extensions.FileProviders.Physical.PhysicalFilesWatcher.TryEnableFileSystemWatcher()
at Microsoft.Extensions.FileProviders.Physical.PhysicalFilesWatcher.CreateFileChangeToken(String filter)
at Microsoft.Extensions.FileProviders.PhysicalFileProvider.Watch(String filter)
at Microsoft.Extensions.Configuration.FileConfigurationProvider.<.ctor>b__1_0()
at Microsoft.Extensions.Primitives.ChangeToken.ChangeTokenRegistration`1..ctor(Func`1 changeTokenProducer, Action`1 changeTokenConsumer, TState state)
at Microsoft.Extensions.Primitives.ChangeToken.OnChange(Func`1 changeTokenProducer, Action changeTokenConsumer)
at Microsoft.Extensions.Configuration.FileConfigurationProvider..ctor(FileConfigurationSource source)
at Microsoft.Extensions.Configuration.Json.JsonConfigurationSource.Build(IConfigurationBuilder builder)
at Microsoft.Extensions.Configuration.ConfigurationBuilder.Build()
at Microsoft.AspNetCore.Hosting.WebHostBuilder.BuildCommonServices(AggregateException& hostingStartupErrors)
at Microsoft.AspNetCore.Hosting.WebHostBuilder.Build()
at Microsoft.Azure.WebJobs.Script.WebHost.Program.BuildWebHost(String[] args) in /src/azure-functions-host/src/WebJobs.Script.WebHost/Program.cs:line 35
at Microsoft.Azure.WebJobs.Script.WebHost.Program.Main(String[] args) in /src/azure-functions-host/src/WebJobs.Script.WebHost/Program.cs:line 25
qemu: uncaught target signal 6 (Aborted) - core dumped
What I tried so far
Forcing Docker to use an AMD64-based base image by adding --platform=linux/amd64 to the FROM instruction of the first stage in the Dockerfile (see the snippet after the error output), which gives me the following error message:
> [installer-env 3/3] RUN cd /src/dotnet-function-app && mkdir -p /home/site/wwwroot && dotnet publish *.csproj --output /home/site/wwwroot:
#9 3.258 Microsoft (R) Build Engine version 16.7.2+b60ddb6f4 for .NET
#9 3.258 Copyright (C) Microsoft Corporation. All rights reserved.
#9 3.258
#9 3.441 qemu: uncaught target signal 11 (Segmentation fault) - core dumped
#9 3.456 Segmentation fault
------
executor failed running [/bin/sh -c cd /src/dotnet-function-app && mkdir -p /home/site/wwwroot && dotnet publish *.csproj --output /home/site/wwwroot]: exit code: 139
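For reference, the modified first stage looked like this:
FROM --platform=linux/amd64 mcr.microsoft.com/dotnet/core/sdk:3.1 AS installer-env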
Changing the base image of the second stage to mcr.microsoft.com/azure-functions/dotnet:3.0-arm32v7, which gave me the following error:
WARNING: The requested image's platform (linux/arm) does not match the detected host platform (linux/arm64/v8) and no specific platform was requested
A fatal error occurred, the folder [/usr/share/dotnet/host/fxr] does not contain any version-numbered child folders
Conclusion
My conclusion, as far as I understand the problem, is that it probably will not work on my M1 machine unless there is a dedicated azure-functions/dotnet image for ARM64 v8 machines. If I am completely wrong, please point me in the right direction.
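For anyone reproducing this: the platform can also be requested at run time rather than build time, which at least silences the warning above (a sketch; the image name is a placeholder for whatever you tagged your build, and the function host listens on port 80 inside the container):
docker run --platform linux/amd64 -p 8080:80 <your-function-image>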
I have a Docker Swarm of four Ubuntu 20.04 machines. I want to run many different apps in a replicated fashion. One of these apps is an API that reads from a remote SQL Server running on a named instance.
I am able to connect to SQL Server if I specify the host network mode but this is not ideal since I won't be able to map ports for other services that I want to run in the future. It also feels like a hack.
This is my docker-compose file, named scapi-stack.yaml:
version: '3.7'
services:
  sc-api:
    command: [ "--privileged" ]
    image: #private repo image
    deploy:
      replicas: 4
      restart_policy:
        condition: on-failure
    ports:
      - "51955:51955" # SQL Server instance port
      - "8443:443"
      - "8080:80"
    environment:
      - ASPNETCORE_URLS=http://+:80
      - ASPNETCORE_URLS=https://+:443
This is the command with which I run the service:
docker stack deploy --compose-file scapi-stack.yaml scapi
If I then docker attach to a container and navigate to port 8080, I see the following error:
Microsoft.Data.SqlClient.SqlException (0x80131904): A network-related or instance-specific error occurred
while establishing a connection to SQL Server. The server was not found or was not accessible.
Verify that the instance name is correct and that SQL Server is configured to allow remote connections.
(provider: TCP Provider, error: 40 - Could not open a connection to SQL Server)
I've tried specifying the --privileged command, to no avail.
I've also tried changing images from the main ones to bionic, focal and buster-slim.
I've tried going down to .NET Core 3.1 as well, and it made no difference.
I have also run these two commands found in this answer:
sysctl net.ipv4.conf.all.forwarding=1
sudo iptables -P FORWARD ACCEPT
The only way I can get it to work is with the host network but that defeats the purpose.
The Dockerfile for the image is this:
FROM mcr.microsoft.com/dotnet/aspnet:5.0 AS base
WORKDIR /app
EXPOSE 80
EXPOSE 443
EXPOSE 51955
FROM mcr.microsoft.com/dotnet/sdk:5.0 AS build
WORKDIR /src
COPY ["SCAPI/SCAPI.csproj", "SCAPI/"]
COPY ["src/Infrastructure/Infrastructure.csproj", "src/Infrastructure/"]
COPY ["src/Application/Application.csproj", "src/Application/"]
COPY ["src/Domain/Domain.csproj", "src/Domain/"]
RUN dotnet restore "SCAPI/SCAPI.csproj"
COPY . .
WORKDIR "/src/SCAPI"
RUN dotnet build "SCAPI.csproj" -c Release -o /app/build
FROM build AS publish
RUN dotnet publish "SCAPI.csproj" -c Release -o /app/publish
FROM base AS final
#Enable connections with TLS 1.0
RUN sed -i 's/MinProtocol = TLSv1.2/MinProtocol = TLSv1/g' /etc/ssl/openssl.cnf
RUN sed -i 's/MinProtocol = TLSv1.2/MinProtocol = TLSv1/g' /usr/lib/ssl/openssl.cnf
RUN sed -i 's/DEFAULT#SECLEVEL=2/DEFAULT#SECLEVEL=1/g' /etc/ssl/openssl.cnf
RUN sed -i 's/DEFAULT#SECLEVEL=2/DEFAULT#SECLEVEL=1/g' /usr/lib/ssl/openssl.cnf
WORKDIR /app
COPY --from=publish /app/publish .
ENTRYPOINT ["dotnet", "SCAPI.dll"]
What am I missing? How can I specify the ports to enable comms to SQL Server without using the host network?
Update
The SQL Server I am trying to connect to is on a Windows machine outside of the swarm. I can connect to it if I remove the port mappings and specify 'host' networking in my compose file:
version: '3.7'
services:
  sc-api:
    command: [ "--privileged" ]
    image: #private repo image
    deploy:
      replicas: 4
      restart_policy:
        condition: on-failure
    environment:
      - ASPNETCORE_URLS=http://+:80
      - ASPNETCORE_URLS=https://+:443
    networks:
      - host
networks:
  host:
    name: host
    external: true
But that has drawbacks as described in the official Docker documentation:
If you use the host network mode for a container, that container’s network stack is not isolated from the Docker host (the container shares the host’s networking namespace), and the container does not get its own IP-address allocated. For instance, if you run a container which binds to port 80 and you use host networking, the container’s application is available on port 80 on the host’s IP address.
Leaving this for posterity
So after close to 60 hours spent trawling the internet looking for clues and trying a million different things, this has finally been resolved.
I even switched to a Kubernetes cluster spun up on the same machines to no avail - I had the same issue.
My network-guru colleague finally got me to install Wireshark on one of the Linux boxes, and he discovered that the traffic was going out of the containers/pods, through the Linux box's interface and on to the SQL Server, but was not coming back.
After some time he discovered that the IP addresses coming out of Swarm/Kubernetes weren't masqueraded, so the core network switch didn't know how to route traffic back to the containers/pods.
These Linux boxes are virtual machines.
A quick sudo iptables --append POSTROUTING --table nat --out-interface ens160 --jump MASQUERADE
where 'ens160' is the network interface - and voila! All good.
This rule rewrites the source IP of all outbound container/pod traffic to the IP address of the box, and the kernel's connection tracking translates the replies back to the right container/pod.
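One caveat worth adding (my note, not part of the original fix): iptables rules do not survive a reboot, so the rule needs to be persisted, e.g. on Ubuntu with the iptables-persistent package:
sudo apt-get install iptables-persistent
sudo netfilter-persistent save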
I am trying to "dockerize" this clean architecture template for .net Core 3. I use the docker pull request here as the base for my proff of concept app. This is a .net core 3 webapi project with an Angular front end as the client app.
WHAT I HAVE:
The base code from the pull request works locally.
An initial problem I had to overcome was setting the cert for IdentityServer4 in a local, non-development environment; I had to mount a volume with the cert and reference it from the appsettings.json file like this:
"IdentityServer": {
"Key": {
"Type": "File",
"FilePath": "/security/mycert.pfx",
"Password": "MyPassword"
}
}
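For context, the mount that places the cert under /security looks roughly like this (a sketch with an illustrative host path; in my case the volume is an Azure file share, as shown in the compose file below):
docker run -v /path/on/host/security:/security ...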
I set up a CI/CD pipeline in Azure to build the project and deploy the image to an Azure Container Registry.
I set up a CI/CD release to deploy the Docker image to a Web App for Containers (Linux) web app. Both of these steps work properly.
MY PROBLEM:
The web app loads and runs the container, and the Angular front end is shown. However, it appears that the Web API is not running. Any attempt to hit an endpoint of the Web API returns the following error in the browser console:
GET https://.....azurewebsites.net/_configuration/CleanArchitecture.WebUI 404 (Not Found)
Error: Uncaught (in promise): Error: Could not load settings for 'CleanArchitecture.WebUI' Error: Could not load settings for 'CleanArchitecture.WebUI'
CleanArchitecture.WebUI is the name of the assembly that is the entry point in the Dockerfile:
ENTRYPOINT ["dotnet", "CleanArchitecture.WebUI.dll"]
All other aspects of the front end work properly; only calls to the "backend" API fail.
Another issue: the Docker logs from the Azure container show no errors.
WHAT I TRIED:
I tried adding "dotnet CleanArchitecture.WebUI.dll" to the startup command of the container in the container settings of the web app, but that just throws an error that it can't find CleanArchitecture.WebUI.dll.
I have tried increasing the logging level ("LogLevel": { "Default": "Debug" }) to get more details, but no additional error details are shown in the Docker logs.
It might be an error loading the IdentityServer4 certificate, but there are no errors to confirm this problem.
Here is my Docker Compose file that is used by the Azure pipeline:
version: '3.4'
services:
  webui:
    image: ${DOCKER_REGISTRY-}webui
    build:
      context: .
      dockerfile: src/WebUI/Dockerfile
    environment:
      - "UseInMemoryDatabase=false"
      - "ASPNETCORE_ENVIRONMENT=Production"
      - "ConnectionStrings__DefaultConnection=myconnection"
      - "ASPNETCORE_Kestrel__Certificates__Default__Password=mypass"
      - "ASPNETCORE_Kestrel__Certificates__Default__Path=/security/mycert.pfx"
    ports:
      - "5000:5000"
      - "5001:5001"
    volumes:
      - mcpdata:/security
    restart: always
mcpdata is the name of the Azure file share that gets mounted and contains the actual cert.
Here is my azure-pipeline.yml for the CI/CD:
trigger:
- staging

resources:
- repo: self

variables:
  # Container registry service connection established during pipeline creation
  dockerRegistryServiceConnection: '****'
  imageRepository: 'cleanarchitecture'
  containerRegistry: '****.azurecr.io'
  dockerComposeFilePath: '$(Build.SourcesDirectory)/docker-compose.Production.yml'
  tag: '$(Build.BuildId)'
  # Agent VM image name
  vmImageName: 'ubuntu-latest'

stages:
- stage: Build
  displayName: Build and push stage
  jobs:
  - job: Build
    displayName: Build
    pool:
      vmImage: $(vmImageName)
    steps:
    - task: Docker@2
      displayName: Build and push an image to container registry
      inputs:
        command: buildAndPush
        repository: $(imageRepository)
        dockerComposeFile: $(dockerComposeFilePath)
        containerRegistry: $(dockerRegistryServiceConnection)
        tags: staging
QUESTION:
Can someone help me figure out why my Web API appears not to be running even though no errors are thrown? At a minimum, I would be happy if someone could help me see the errors in the Docker logs.
Thanks in advance.
I tried to reproduce this with "clean architecture" using the following (note: I'm using zsh on macOS, but the same should work on Windows/Linux too):
take clean_architecture
dotnet new --install Clean.Architecture.Solution.Template
dotnet new ca-sln
The documentation suggests that pressing F5 in Visual Studio will start the template, although I first had to do:
cd src/WebUI/ClientApp
npm install
At this point the app starts locally by hitting F5. Note what happens here: ASP.NET Core forwards requests to the Angular dev server, so effectively this does ng serve --port 53543 AND starts ASP.NET Core (Kestrel in my case) on port 5001. Browsing to http://127.0.0.1:53543 provides the Angular page directly, while browsing to https://localhost:5001 brings up the same Angular page, as forwarded by ASP.NET Core to Angular. All very confusing... Detailed more here.
Note that the following lines of code exist in Startup.cs; which branch runs is usually determined by the environment variable ASPNETCORE_ENVIRONMENT:
if (!env.IsDevelopment())
{
app.UseSpaStaticFiles();
}
-- and within "app.UseSpa"
if (env.IsDevelopment())
{
spa.UseAngularCliServer(npmScript: "start");
}
Anyway, it looks like you've got that environment variable set to Production, which should just serve the built files from the ClientApp/dist folder (rather than forwarding to the dev server). That suggests that if you can see the Angular page, the .NET Core service is running... I'll try to rebuild the Dockerfiles first...
FROM mcr.microsoft.com/dotnet/core/aspnet:3.1-buster-slim AS base
ENV ASPNETCORE_URLS=https://+:5001;http://+:5000
WORKDIR /app
EXPOSE 5000
EXPOSE 5001
FROM mcr.microsoft.com/dotnet/core/sdk:3.1-buster AS build
RUN curl -sL https://deb.nodesource.com/setup_12.x | bash -
RUN apt install -y nodejs
WORKDIR /src
COPY ./src/WebUI/WebUI.csproj src/WebUI/
COPY ./src/Application/Application.csproj src/Application/
COPY ./src/Domain/Domain.csproj src/Domain/
COPY ./src/Infrastructure/Infrastructure.csproj src/Infrastructure/
RUN dotnet restore "src/WebUI/WebUI.csproj"
COPY . .
WORKDIR "/src/src/WebUI"
RUN dotnet build "WebUI.csproj" -c Release -o /app/build
FROM build AS publish
RUN dotnet publish "WebUI.csproj" -c Release -o /app/publish
FROM base AS final
WORKDIR /app
COPY --from=publish /app/publish .
ENTRYPOINT ["dotnet", "CleanArchitecture.WebUI.dll"]
Then build and run as follows:
# build takes a while
docker build -f ./src/WebUI/Dockerfile -t clean-architecture .
# note, this fails the first time, because I set the project up as clean_architecture, so the entry point is incorrect
docker run --rm -it -p 5000:5000 -p 5001:5001 clean-architecture
# run the container and override the entrypoint
docker run --rm -it --entrypoint /bin/bash clean-architecture
# From within the container...
root@93afb0ad21c5:/app# dotnet clean_architecture.WebUI.dll
# note, in .NET 3.1, you can also do this directly, as follows:
root@93afb0ad21c5:/app# ./clean_architecture.WebUI
Now there is a problem with LocalDB: System.PlatformNotSupportedException: LocalDB is not supported on this platform.
Switch appsettings.Production.json to "UseInMemoryDatabase": true.
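That is, in appsettings.Production.json (surrounding keys omitted):
{
  "UseInMemoryDatabase": true
}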
The problem then appears to be certificates...
I created a certificate using:
dotnet dev-certs https -ep ./https/clean-arch.pfx -p anything
For IdentityServer, I changed appsettings.Production.json as follows:
"IdentityServer": {
"Key": {
"Type": "File",
"FilePath": "/app/https/https/clean-arch.pfx",
"Password": "anything"
}
}
Running on Linux probably means running Kestrel, which means we need to provide HTTPS certs there too. I did that by setting the following in Program.cs:
public static IHostBuilder CreateHostBuilder(string[] args) =>
Host.CreateDefaultBuilder(args)
.ConfigureWebHostDefaults(webBuilder =>
{
webBuilder.ConfigureKestrel((context, options) =>
{
options.AllowSynchronousIO = true;
options.Listen(IPAddress.Loopback, 5000, listenOptions =>
{
listenOptions.UseConnectionLogging();
listenOptions.Protocols = Microsoft.AspNetCore.Server.Kestrel.Core.HttpProtocols.Http1AndHttp2;
});
options.Listen(IPAddress.Any, 5001, listenOptions =>
{
listenOptions.UseConnectionLogging();
listenOptions.Protocols = Microsoft.AspNetCore.Server.Kestrel.Core.HttpProtocols.Http1AndHttp2;
listenOptions.UseHttps(new System.Security.Cryptography.X509Certificates.X509Certificate2("/app/https/https/clean-arch.pfx", "anything"));
});
});
webBuilder.UseStartup<Startup>();
});
At each stage I built the app in docker using...
/clean_architecture $ docker build -f ./src/WebUI/Dockerfile -t clean-architecture .
/clean_architecture $ docker run --rm -it -v /Users/monkey/src/csharp/clean_architecture/:/app/https/ -p 5000:5000 -p 5001:5001 --entrypoint /bin/bash clean-architecture
... and once running in bash (in Docker), I used the following to start the application:
root@c5b4010d03be:/app# ./clean_architecture.WebUI
Good luck, I hope that helps. Note: if it works in Docker on your machine, it should work in Azure. I'll look at getting it going in Azure another day. Happy to upload my code to GitHub if it would help.
Thanks to 0909EM for the huge effort in answering the question, but the solution was different.
I figured out what was going on. There are two issues.
The docker-compose.override.yml file looks like this:
version: '3.4'
services:
  webui:
    environment:
      - "ASPNETCORE_ENVIRONMENT=Development"
      - "SpaBaseUrl=http://clientapp:4200"
  clientapp:
    image: ${DOCKER_REGISTRY-}clientapp
    build:
      context: src/WebUI/ClientApp
      dockerfile: Dockerfile
    depends_on:
      - webui
    restart: on-failure
  db:
    ports:
      - "1433:1433"
Notice the line dockerfile: Dockerfile in the src/WebUI/ClientApp build context. That Dockerfile was overwriting the proper Dockerfile in src/WebUI during the Azure pipeline build. When I run docker-compose -f 'docker-compose.Production.yml' up --build locally, it does not pull in the docker-compose.override.yml settings (Compose only merges the override file automatically when it is run without -f), but the override settings do get used in the Azure pipeline build.
Therefore the Angular Dockerfile was the only one being built, and that image does not contain the .NET Core Web API project. That explains why I could see the front end but could not reach the API endpoints, and also why the Docker logs showed no .NET Core errors.
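To reproduce the pipeline's merging behaviour locally, both files can be passed explicitly (a sketch):
docker-compose -f docker-compose.Production.yml -f docker-compose.override.yml up --build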
I was able to fix this in two ways.
First: rename the Dockerfile in src/WebUI/ClientApp to Dockerfile.clientapp and change the line in the docker-compose.override.yml file to dockerfile: Dockerfile.clientapp (see the snippet below).
Second: simply remove the docker-compose.override.yml file from the online repository that the Azure pipeline pulls from.
As a result, the proper Dockerfile is used and the Web API project is in the image.
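The first fix, as an override snippet (a sketch of just the changed service):
services:
  clientapp:
    build:
      context: src/WebUI/ClientApp
      dockerfile: Dockerfile.clientapp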
The second issue: now that the proper image is running, the .NET Core Web API throws an error about loading the cert for IdentityServer. This confirms my suspicion. Because this issue is not related to my original question about getting the Web API running in the container, I have opened another question about it.
I'm trying to get a service on-boarded to Docker containers.
Target Framework - .NET Framework 4.6.1
Output type - Console Application
When I initially tried right-click -> Add -> Docker Support on the project in the Visual Studio 2017 solution, I got a prompt saying "You cannot add Docker support to this project type".
I went ahead and created a Dockerfile by hand and placed it in the project root. This is what it looks like:
FROM microsoft/windowsservercore
SHELL ["powershell", "-command"]
RUN Add-WindowsFeature NET-Framework-45-ASPNET, Web-Asp-Net45
EXPOSE 80
EXPOSE 5000
EXPOSE 13134
ADD ./bin/Debug/net461/win7-x64 .
COPY ./bin/cert-that-needs-to-be-installed.pfx /cert-that-needs-to-be-installed.pfx
RUN $Secure_String_Pwd = ConvertTo-SecureString "thePasswordToTheCert!" -AsPlainText -Force; \
Import-PfxCertificate -FilePath .\cert-that-needs-to-be-installed.pfx -CertStoreLocation Cert:\LocalMachine\My -Exportable -Password $Secure_String_Pwd;
ENV ASPNETCORE_ENVIRONMENT="Development"
ENTRYPOINT ["./ServiceName.exe"]
Upon doing the following, I was able to successfully create an image, and also a container with the service running on it (the prod version listens on port 5000 while the test one listens on 13134, and those ports are also open in my localhost firewall):
docker build -t servicename .; docker run -p 5000:5000 -p 13134:13134 -it servicename
I am able to get the IP of the container by doing this:
docker inspect <containerId>
Now when I try to test the service by making an HTTP GET call to an API using Postman, I don't get a response. I have the Docker container open in my command prompt, and I don't see my request coming in either.
My service works fine on a Windows VM.
Since my GET calls don't even seem to reach the container, I fear I am doing the port mapping wrong - or is it something else?
From what I know, you don't need the IP of the container; the ports have been published on your localhost, so you should get a response when you make REST calls to localhost:5000 or localhost:13134.
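For example (the route is a placeholder for whatever endpoint the service exposes):
curl http://localhost:5000/api/your-endpoint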