Docker - understanding start/run - C#

I know I have images and instances of these images - containers. But consider that I have an image created for a .NET Core app. Here is the Dockerfile:
FROM microsoft/dotnet:1.1-sdk
WORKDIR /app
COPY . ./
RUN dotnet restore
RUN dotnet publish -c Release -o out
ENTRYPOINT ["dotnet", "out/dotnetapp.dll"]
My dotnetapp.dll throws an exception at startup.
So now I can build the image:
docker build . -t my-image
Then create and run a container:
docker run --name my-container -t my-image
Everything works as expected - my dotnetapp.dll runs and then the exception is thrown (from inside the app, as expected).
Now I'd like to rerun (restart) my container.
So I thought that start should be enough:
docker start my-container
But nothing happens (my dotnetapp.dll was not run). I have also tried to stop and start again:
docker stop my-container
docker start my-container
run also doesn't help:
docker run --name my-container -t my-image
because the container already exists.
So far my pattern to re-run the app has been:
docker rm my-container --force
docker run --name my-container -t my-image
But I don't think that is the proper way.
So why doesn't start/stop work in this case? How can I restart the container so that my dotnetapp.dll runs again, without removing the container?
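For what it's worth, one thing that can make the restart look like a no-op is that docker start does not attach to the container's output by default, so the app may actually run (and throw) again without anything appearing in the terminal. A minimal sketch of commands that surface that output:
docker start -a my-container    # -a attaches stdout/stderr, like the original docker run did
docker logs my-container        # or inspect the output after the container has exited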

Related

How to fix port error when dockerizing .Net API

I am trying to dockerize my .NET API and I cannot seem to get access to it after I create a container with it. I attempt to send a request using Postman, but I get a "Socket Hang Up" error. I believe this has to do with the ports I am using, although I am not sure how to fix it. Below is all the information I could gather.
Dockerfile:
FROM mcr.microsoft.com/dotnet/sdk:6.0 AS build
WORKDIR /app
COPY WebAPI.csproj .
RUN dotnet restore "WebAPI.csproj"
COPY . ./
RUN dotnet publish "WebAPI.csproj" -c Release -o /publish
RUN dotnet build
FROM build AS final
WORKDIR /app
COPY --from=build /publish .
EXPOSE 5000
ENTRYPOINT ["dotnet", "WebAPI.dll"]
Commands:
docker build -t webapi:latest .
docker run -p 5000:5000 webapi:latest
Postman Proxy:
127.0.0.1:5000
P.S. I have tried changing the ports in multiple ways and changing the proxy settings for Postman, but nothing seems to work.
Microsoft has set the environment variable ASPNETCORE_URLS to http://+:80/ in the aspnet image, which makes your application listen on port 80.
So your run command should map port 80, like this:
docker run -p 5000:80 webapi:latest
Then your API will be available on http://localhost:5000/
Note that Swagger is only available when your application runs in Development mode and the Docker environment is not considered development. So by default, Swagger won't be available.
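Alternatively, if you want the app itself to listen on port 5000 inside the container (matching the EXPOSE 5000 in your Dockerfile), you can override ASPNETCORE_URLS at run time; a sketch:
docker run -p 5000:5000 -e ASPNETCORE_URLS=http://+:5000 webapi:latest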
Update: Since I don't have your program source code, I've created the following Dockerfile that runs dotnet new to create a fresh template webapi project.
FROM mcr.microsoft.com/dotnet/sdk:6.0 AS build
WORKDIR /src
RUN dotnet new webapi -n WebAPI -o .
RUN dotnet publish -c Release -o /publish
FROM mcr.microsoft.com/dotnet/aspnet:6.0
WORKDIR /app
COPY --from=build /publish .
CMD ["dotnet", "WebAPI.dll"]
I then run the following commands to build, run and test the container
docker build -t test .
docker run --rm -d -p 5000:80 test
curl http://localhost:5000/WeatherForecast
and I get the expected result from the API.
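If you're ever unsure whether a port mapping took effect, docker port shows how a container's ports are published on the host, for example:
docker port <container id>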

'realpath(): Permission denied' when running dotnet ef command as non-root in dotnet/sdk:3.1-bullseye container on k8s

Problem
A container meant to run database migrations works locally, on Docker for Mac, but fails on Kubernetes, repeatedly logging
realpath(): Permission denied
Failed to resolve full path of the current executable [/proc/self/exe]
Reproduction
I have an image built from the following Dockerfiles (other descendants of base not included):
#Dockerfile.base
FROM mcr.microsoft.com/dotnet/sdk:3.1-bullseye
RUN dotnet tool install --global dotnet-ef
ENV PATH="$PATH:/root/.dotnet/tools"
ADD event_processor/dotnet /app
ADD classification_registry/topic_registry /app/topic_registry
WORKDIR /app
and
#Dockerfile.migrate
FROM app-base
ENV DOTNET_CLI_HOME=/app
RUN addgroup --system --gid 1000 app \
&& adduser --home /app --system --uid 2000 --ingroup app --shell /bin/sh appmigrate
RUN chown -R appmigrate /app
RUN chown -R appmigrate /root/.dotnet/tools
RUN chown -R appmigrate /tmp
USER appmigrate
ENV PATH="$PATH:/app/.dotnet/tools"
RUN dotnet tool install --global dotnet-ef
So if we let unique-image-ref be a unique tag for this built image, I am able to run the container locally, as I expect, with:
$ docker run -it --rm --user 2000 unique-image-ref dotnet-ef database update
Build started...
Build succeeded.
Configuring DB Access for migrations...
No migrations were applied. The database is already up to date.
Done.
So far so good. The problem arises in the Kubernetes cluster, when a Job is configured to run this container, with the following container definition
containers:
  - name: my-app-migration
    image: unique-image-ref
    imagePullPolicy: Always
    workingDir: /app
    command: ["dotnet-ef"]
    args:
      - database
      - update
    envFrom:
      - configMapRef:
          name: app-conf
      - secretRef:
          name: app-secret
restartPolicy: OnFailure
securityContext:
  runAsNonRoot: true
  runAsUser: 2000
When I examine the logs from the container, I see nothing but the error (at the top of this post).
Any suggestions would be welcomed.
I did find a solution, though I cannot articulate exactly why this change fixes it.
containers:
  - name: my-app-migration
    image: unique-image-ref
    imagePullPolicy: Always
    workingDir: /app
    command: ["/bin/sh"]
    args: ["-c", "dotnet-ef --project=Seismic.NotificationEventProcessor.RecoveryQueue database update"]
    envFrom:
      - configMapRef:
          name: app-conf
      - secretRef:
          name: app-secret
restartPolicy: OnFailure
securityContext:
  runAsNonRoot: true
  runAsUser: 2000
So the fix is to invoke the command from a new shell and pass the command to execute as a string (hence the -c). But realistically, the details of why this works escape me.
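For comparison, the same shell-wrapped invocation can also be tried locally against the image (a sketch based on the docker run command earlier in this post):
docker run -it --rm --user 2000 unique-image-ref /bin/sh -c "dotnet-ef database update"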
I hope this helps someone else out with a very obscure error message. If anyone cares to expand on the underlying behavior, I would be glad to understand the relationships among Docker, Kubernetes, and the shell in this regard.

Docker container on raspberry with linux/arm64

I have the following Dockerfile:
FROM mcr.microsoft.com/dotnet/core/runtime:3.1-buster-slim AS base
WORKDIR /app
FROM mcr.microsoft.com/dotnet/core/sdk:3.1-buster AS build
WORKDIR /src
COPY ["Pitman.csproj", ""]
RUN dotnet restore "./Pitman.csproj"
COPY . .
WORKDIR "/src/."
RUN dotnet build "Pitman.csproj" -c Release -o /app/build
FROM build AS publish
RUN dotnet publish "Pitman.csproj" -c Release -o /app/publish
FROM base AS final
WORKDIR /app
COPY --from=publish /app/publish .
ENTRYPOINT ["dotnet", "Pitman.dll"]
and I'm running the build command:
docker buildx build --platform linux/arm64 -t latest .
then I'm tagging and pushing the image to Docker Hub:
docker tag 8986ff79cb02 myid/pitman:latest
docker push myid/pitman
downloading the image on the Raspberry Pi:
sudo docker pull myid/pitman:latest
and when I run the image
sudo docker run 8986ff79cb02
I get the following error:
standard_init_linux.go:211: exec user process caused "exec format error"
After building the image on my RPi I get this:
Step 6/15 : RUN dotnet restore "./Pitman.csproj"
---> Running in 8562957be5d6
standard_init_linux.go:211: exec user process caused "exec format error"
The command '/bin/sh -c dotnet restore "./Pitman.csproj"' returned a non-zero code: 1
What am I missing?
This is the base image you're using for the final docker image:
mcr.microsoft.com/dotnet/core/runtime:3.1-buster-slim
That image only supports amd64 (not ARM). You need to choose another base image that is either ARM-only or multi-arch (amd64 + arm64). For example, "latest" and "3.1.8" are multi-arch at the time of writing.
I know for a fact that if you build an image on ARM (with an ARM base image) it will be able to run on any ARM system. I've never done multi-arch builds before, so I don't know if choosing a multi-arch base image, building on amd64, then trying to run on arm64, will work (but based on your comment it sounds like it did).
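For reference, a cross-build that targets arm64 and pushes the result in one step might look like the sketch below (using the repository name from the question, and assuming the base images in the Dockerfile are multi-arch):
docker buildx build --platform linux/arm64 -t myid/pitman:latest --push .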

appsettings.json file not found issue on docker run

I am trying to build and run a Docker image of a .NET Core application.
Here is what I tried so far:
Created a .NET Core application (.NET Core 2.2)
Published the application using the command below:
dotnet publish -c Release
Created a Dockerfile with the following instructions:
FROM mcr.microsoft.com/dotnet/core/aspnet:2.2
COPY myapp/bin/Release/netcoreapp2.2/publish/ app/
ENTRYPOINT ["dotnet", "app/myapp.dll"]
Built the Docker image with the following command:
docker build -t planservice -f Dockerfile .
The image got built successfully. But when I run the image, I get the error below:
C:\app>docker run -it --rm planservice
Unhandled Exception: System.IO.FileNotFoundException: The configuration file 'appsettings.json' was not found and is not optional. The physical path is '/appsettings.json'.
at Microsoft.Extensions.Configuration.FileConfigurationProvider.Load(Boolean reload)
at Microsoft.Extensions.Configuration.FileConfigurationProvider.Load()
at Microsoft.Extensions.Configuration.ConfigurationRoot..ctor(IList`1 providers)
at Microsoft.Extensions.Configuration.ConfigurationBuilder.Build()
at myapp.Program.GetConfiguration() in C:\app\MFPServices\myapp\Program.cs:line 64
at myapp.Program.Main(String[] args) in C:\app\MFPServices\myapp\Program.cs:line 16
As per #Jawad's suggestion, I have modified my Dockerfile to set the working directory to the /app folder.
appsettings.json needs to be present in the current working directory at run time.
FROM mcr.microsoft.com/dotnet/core/aspnet:2.2
COPY myapp/bin/Release/netcoreapp2.2/publish/ app/
WORKDIR /app
COPY . .
ENTRYPOINT ["dotnet", "myapp.dll"]
Now, it is working properly.
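An alternative sketch (not from the answers here) is to set WORKDIR before copying the publish output, so that both the DLL and appsettings.json land in the working directory and the extra COPY is unnecessary:
FROM mcr.microsoft.com/dotnet/core/aspnet:2.2
WORKDIR /app
COPY myapp/bin/Release/netcoreapp2.2/publish/ .
ENTRYPOINT ["dotnet", "myapp.dll"]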
This can also be caused by the file name being case sensitive.
Try the command:
docker exec -it <container number> cp appsettings.json AppSettings.json

Docker container won't start

I created a simple application that connects to PostgreSQL, but when I containerized it using Docker I can't seem to start it, and no ports were shown when I ran the container. What seems to be the problem?
The application works without Docker, but when I containerize it, that's where the problem arises.
docker build is successful, and when I use docker run
it gives me a container, but when I check it using docker ps -a, no ports are shown.
This is what I did on the Docker terminal,
and this is my code for connecting to the PostgreSQL DB:
using System;
using System.Linq;
using Dapper;
using Npgsql;

class Program
{
    static void Main(string[] args)
    {
        using (var connection = new NpgsqlConnection("Host=localhost;Username=postgres;Password=password;Database=user"))
        {
            connection.Open();
            connection.Execute("Insert into customer (name) values ('Mikolasola');");
            var value = connection.Query<string>("Select name from customer;");
            Console.WriteLine(value.First());
        }
        Console.Read();
    }
}
Here's my Dockerfile:
FROM mcr.microsoft.com/dotnet/core/sdk:2.2 AS build-env
WORKDIR /app
COPY *.csproj ./
RUN dotnet restore
COPY . ./
RUN dotnet publish -c Release -o out
FROM mcr.microsoft.com/dotnet/core/aspnet:2.2
WORKDIR /app
COPY --from=build-env /app/out .
ENTRYPOINT ["dotnet", "webapplication1.dll"]
Edit: I changed my Dockerfile to this, and somehow I can get the port now:
FROM microsoft/dotnet:2-sdk AS build-env
WORKDIR /app
COPY *.csproj ./
RUN dotnet restore
COPY . ./
RUN dotnet publish -c Release -r linux-x64 -o out
FROM microsoft/dotnet:2-runtime-deps
WORKDIR /app
COPY --from=build-env /app/out ./
ENTRYPOINT ["./webapplication1"]
But I can't run it. Here's the screenshot
Any help would be much appreciated. Thanks!
PS: I'm a newbie
If you're using Docker for Mac (or Docker for Windows), to access anything on the host, you need to use host.docker.internal. More on that here.
Read the connection string from config and environment variables. That will allow you to override it when running the container.
PS: I'm a newbie
Pro tip for you :p
You might not want to use -d on the container you're working on, so you see its logs right there. Had you not used it, you'd have seen the process exit and not wondered why there's no port shown there :)
UPDATE:
Use the Dockerfile from the original question (before the edit).
I suggested you run it without -d:
docker run -p 8080:80 --name test webapp1
This will show you the ASP.NET app crash, so you'll see what's happening. In this case I know what's happening and can help, but in general you want to see it for yourself. You do that by not running in a detached state for small things like this, and by using docker logs when there's a lot to parse. Anyway...
... you want the container to access the database on the host, not on its localhost. You do that by using host.docker.internal as the host. Eg:
Host=host.docker.internal;Username=postgres;Password=password;Database=user
This only works if you're using Docker for Windows or Docker for Mac. Doesn't work on Docker Toolbox or on linux AFAIK.
Now you need to figure out a way for your application to use host.docker.internal when running within Docker and localhost otherwise. My suggestion was to read it from config.
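A minimal sketch of that suggestion in C# (the DB_HOST variable name is just an illustration, not something from the original code); it slots into the Main method from the question:
// Hypothetical: read the DB host from an environment variable, falling back to localhost for local runs.
var host = Environment.GetEnvironmentVariable("DB_HOST") ?? "localhost";
var connectionString = $"Host={host};Username=postgres;Password=password;Database=user";
using (var connection = new NpgsqlConnection(connectionString))
{
    connection.Open();
    // ... same Dapper calls as before
}
You could then run the container with something like docker run -e DB_HOST=host.docker.internal -p 8080:80 --name test webapp1, and keep using localhost when running outside Docker.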
