I created a docker volume with:
docker volume create my-volume-name
In my console app running in the Docker container, I use a container-creation config like this:
{
"HostConfig": {
"Binds": [
"my-volume-name:/app/my-volume-name:rw"
],
"Privileged": true
}
}
The container sees the volume but doesn't have permission to write to it; I get a permission denied error. (The exception below is from my attempt to create a file in the volume.)
// create a file in the mounted volume
try
{
    string path = Path.Combine(Directory.GetCurrentDirectory(), "my-volume-name", "example.txt");
    Console.WriteLine($"PATH {path}");
    if (!File.Exists(path))
    {
        File.Create(path); // note: returns an open FileStream that is never disposed here
        TextWriter tw = new StreamWriter(path);
        tw.WriteLine($"{DateTime.Now} The very first line!");
        tw.Close();
    }
    else
    {
        using (var tw = new StreamWriter(path, true))
        {
            tw.WriteLine($"{DateTime.Now} The next line!");
        }
    }
}
catch (Exception e)
{
    Console.WriteLine($"ERROR: {e}");
}
This is what I get:
Access to the path '/app/my-volume-name/example.txt' is denied. ---> System.IO.IOException: Permission denied
What am I missing to be able to write to the volume I created with docker volume create? The goal is a shared volume on the host that certain containers I've created can read from and write to.
Edit (Dockerfile appended):
FROM microsoft/dotnet:2.0-runtime-stretch AS base
RUN apt-get update && \
apt-get install -y --no-install-recommends unzip procps && \
rm -rf /var/lib/apt/lists/*
RUN useradd -ms /bin/bash moduleuser
USER moduleuser
RUN curl -sSL https://aka.ms/getvsdbgsh | bash /dev/stdin -v latest -l ~/vsdbg
FROM microsoft/dotnet:2.0-sdk AS build-env
WORKDIR /app
COPY *.csproj ./
RUN dotnet restore
COPY . ./
RUN dotnet publish -c Debug -o out
FROM base
WORKDIR /app
COPY --from=build-env /app/out ./
# Moved down because CI/CD builds in VSTS didn't like two COPYs next to each other.
ENTRYPOINT ["dotnet", "ResourceModule.dll"]
When I ls -l in the container, I see this:
drwxr-xr-x 3 root root 4096 Jul 2 13:28 my-volume-name
Edit 2:
When I remove the creation of moduleuser in the Dockerfile, I get this error:
ERROR:System.IO.IOException: The process cannot access the file '/app/my-volume-name/example.txt' because it is being used by another process.
But the file was created, so it looks like I'm on the right path.
docker volume create my-volume-name creates the volume owned by root. Check which user your app runs as inside the container. Either update the ownership of the volume or run the application as root. If you need further help, please post the Dockerfile used for the application.
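A sketch of the ownership fix for this Dockerfile (the mkdir/chown lines are my addition, not from the original): pre-create the mount point and hand it to moduleuser before switching users. When a freshly created, empty named volume is first mounted at that path, Docker copies the directory's ownership from the image into the volume, so moduleuser can write to it.

```dockerfile
FROM microsoft/dotnet:2.0-runtime-stretch AS base
# Pre-create the mount point and give it to the non-root user;
# an empty named volume inherits this ownership on first mount.
RUN useradd -ms /bin/bash moduleuser \
    && mkdir -p /app/my-volume-name \
    && chown moduleuser:moduleuser /app/my-volume-name
USER moduleuser
```

A volume that already exists keeps its original root ownership; in that case you can chown it once from a throwaway root container, e.g. docker run --rm -v my-volume-name:/fix alpine chown -R 1000:1000 /fix (the UID/GID 1000 is an assumption; check the actual IDs with id moduleuser inside your container).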
I have a NuGet package stored on my local filesystem at C:\Temp. When I add the NuGet package to my Visual Studio solution, everything builds as expected. However, when I run my docker build command, I receive this error:
MyProject depends on NugetExample.DLL (>= 0.0.4) but NugetExample.DLL 0.0.4 was not found.
Here's a copy of my Dockerfile:
FROM mcr.microsoft.com/dotnet/sdk:6.0 as build
WORKDIR /build
EXPOSE 80
EXPOSE 443
EXPOSE 3000
COPY . .
# Restore/build...
# Use debug publish when you need to debug the service running in docker
RUN dotnet publish src/app/MyProject.csproj -c Debug -o /app
# Stage 2
FROM mcr.microsoft.com/dotnet/aspnet:6.0
WORKDIR /app
COPY --from=build /app .
RUN sed -i -e "s|^MinProtocol = .*|MinProtocol = TLSv1.0|g" "/etc/ssl/openssl.cnf"
ENTRYPOINT ["dotnet", "MyProject.dll"]
Is there a way to get docker to find my nuget package stored on my local filesystem?
To pull a NuGet package from your local filesystem, it needs to be in your Docker build context: https://docs.docker.com/build/building/context/
Alternatively, you can use the new dotnet publish support for producing a container image (https://learn.microsoft.com/en-us/dotnet/core/docker/publish-as-container), which does a local build and then copies the output into a container image for you. One caveat for your use case: to maintain the effect of:
RUN sed -i -e "s|^MinProtocol = .*|MinProtocol = TLSv1.0|g" "/etc/ssl/openssl.cnf"
You would need to create a base image based on your current stage 2 and then configure publish to use it with ContainerBaseImage.
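As a sketch of the build-context route: if the .nupkg is copied into a local-packages folder inside the repo (the folder name and source key are my assumptions), a nuget.config next to the Dockerfile can register it as a package source; COPY . . already brings both into the image, where dotnet restore picks the config up automatically.

```xml
<!-- nuget.config, committed at the root of the build context -->
<configuration>
  <packageSources>
    <!-- relative path, resolved against this file's location -->
    <add key="local" value="local-packages" />
  </packageSources>
</configuration>
```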
I just created a dummy Azure Function with the default template. Here is the default Dockerfile from VS:
FROM mcr.microsoft.com/azure-functions/dotnet:4 AS base
WORKDIR /home/site/wwwroot
EXPOSE 80
FROM mcr.microsoft.com/dotnet/sdk:6.0 AS build
WORKDIR /src
COPY ["FunctionApp1/FunctionApp1.csproj", "FunctionApp1/"]
RUN dotnet restore "FunctionApp1/FunctionApp1.csproj"
COPY . .
WORKDIR "/src/FunctionApp1"
RUN dotnet build "FunctionApp1.csproj" -c Release -o /app/build
FROM build AS publish
RUN dotnet publish "FunctionApp1.csproj" -c Release -o /app/publish
FROM base AS final
WORKDIR /home/site/wwwroot
COPY --from=publish /app/publish .
ENV AzureWebJobsScriptRoot=/home/site/wwwroot \
AzureFunctionsJobHost__Logging__Console__IsEnabled=true
So I executed these two commands, which work perfectly fine for an API project but not for the Azure Function project:
docker build -t function1 -f FunctionApp1/Dockerfile .
docker run -it --rm -p 8080:80 --name FunctionApp1_sample function1:latest
It keeps throwing the error below:
The listener for function 'Function1' was unable to start.
Microsoft.Azure.WebJobs.Host.Listeners.FunctionListenerException: The listener for function 'Function1' was unable to start.
---> System.InvalidOperationException: Could not create BlobContainerClient for ScheduleMonitor
at Microsoft.Azure.WebJobs.Extensions.Timers.StorageScheduleMonitor.get_ContainerClient() in C:\azure-webjobs-sdk-extensions\src\WebJobs.Extensions.Timers.Storage\StorageScheduleMonitor.cs:line 83
at Microsoft.Azure.WebJobs.Extensions.Timers.StorageScheduleMonitor.GetStatusBlobClient(String timerName, Boolean createContainerIfNotExists) in C:\azure-webjobs-sdk-extensions\src\WebJobs.Extensions.Timers.Storage\StorageScheduleMonitor.cs:line 155
at Microsoft.Azure.WebJobs.Extensions.Timers.StorageScheduleMonitor.GetStatusAsync(String timerName) in C:\azure-webjobs-sdk-extensions\src\WebJobs.Extensions.Timers.Storage\StorageScheduleMonitor.cs:line 94
I have also tried setting the following environment variables explicitly in the Dockerfile, and it's still not working. Any suggestions? Thanks.
ENV AzureWebJobsScriptRoot=/home/site/wwwroot \
AzureFunctionsJobHost__Logging__Console__IsEnabled=true \
AzureWebJobsStorage="UseDevelopmentStorage=true" \
FUNCTIONS_WORKER_RUNTIME=dotnet
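One thing worth checking for the BlobContainerClient error: the timer trigger persists its schedule state in blob storage, and UseDevelopmentStorage=true resolves to 127.0.0.1 inside the container, where no storage emulator is listening. A sketch of passing a reachable connection string at run time instead (account name and key are placeholders; with Docker Desktop, an Azurite emulator running on the host can be reached via host.docker.internal):

```shell
docker run -it --rm -p 8080:80 --name FunctionApp1_sample \
  -e AzureWebJobsStorage="DefaultEndpointsProtocol=https;AccountName=<account>;AccountKey=<key>;EndpointSuffix=core.windows.net" \
  function1:latest
```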
Following the MS doc given by @peinearydevelopment, I built the function app with the Dockerfile below, published it to a Docker container, and then deployed it to an Azure Function App. I also enabled continuous deployment along with an SSH connection.
docker run --rm -it -p 8080:80 harikr572/azurefunctionsimage:v1.0.0
Dockerfile:
FROM mcr.microsoft.com/dotnet/sdk:6.0 AS installer-env
COPY . /src/dotnet-function-app
RUN cd /src/dotnet-function-app && \
mkdir -p /home/site/wwwroot && \
dotnet publish *.csproj --output /home/site/wwwroot
FROM mcr.microsoft.com/azure-functions/dotnet:4
ENV AzureWebJobsScriptRoot=/home/site/wwwroot \
AzureFunctionsJobHost__Logging__Console__IsEnabled=true
COPY --from=installer-env ["/home/site/wwwroot", "/home/site/wwwroot"]
I have a console application published for Linux; the application reads data from a particular directory. To run the console application, I do the following:
./myapp "/home/user1/mydata"
Files in mydata directory will be changing. It all works fine when I run the application directly in the Linux terminal.
But when I dockerize the application, I am unable to read the directory "/home/user1/mydata".
Below are the Dockerfile contents:
FROM mcr.microsoft.com/dotnet/aspnet:3.1 AS base
WORKDIR /app
COPY . .
ENTRYPOINT ["./myapp"]
My intention is that when I run the Docker image, I also include the path of the directory, for example:
docker run myimage:latest "/home/user1/mydata"
I understand that in order to read the directory, I first need to mount it, so I created a volume:
docker volume create myvolume
and then mounted my target directory
docker run -t -d -v my-vol:/home/user1/mydata --name mycontainer myimage:latest
Even after mounting, when I run the container as
docker run myimage:latest "/home/user1/mydata"
it is still unable to read the directory. Am I doing something wrong here? After mounting the directory, do I have to change the way I pass my argument, in this case /home/user1/mydata?
docker volume create myvolume creates a folder in Docker's own storage location, typically /var/lib/docker/volumes, and -v my-vol:/home/user1/mydata mounts that volume over /home/user1/mydata inside the container; it does not expose the host's /home/user1/mydata.
So, for your case, you need a bind mount, not a volume, something like this:
docker run -t -d -v /home/user1/mydata:/home/user1/mydata --name mycontainer myimage:latest "/home/user1/mydata"
-v /home/user1/mydata:/home/user1/mydata mounts the host's /home/user1/mydata folder to the same path in the container, which should meet your requirement.
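The same bind mount can be written with the more explicit --mount syntax, which fails loudly if the host path doesn't exist (a sketch, equivalent to the -v form above):

```shell
docker run -t -d \
  --mount type=bind,source=/home/user1/mydata,target=/home/user1/mydata \
  --name mycontainer myimage:latest "/home/user1/mydata"
```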
I have the following Dockerfile in my .NET Core 2.2 console application.
FROM mcr.microsoft.com/dotnet/core/runtime:2.2-stretch-slim AS base
WORKDIR /app
FROM mcr.microsoft.com/dotnet/core/sdk:2.2-stretch AS build
WORKDIR /src
COPY ["TaikunBillerPoller.csproj", ""]
RUN dotnet restore "TaikunBillerPoller.csproj"
COPY . .
WORKDIR "/src/"
RUN dotnet build "TaikunBillerPoller.csproj" -c Release -o /app
FROM build AS publish
RUN dotnet publish "TaikunBillerPoller.csproj" -c Release -o /app
FROM base AS final
WORKDIR /app
COPY --from=publish /app .
ENTRYPOINT ["dotnet", "TaikunBillerPoller.dll"]
My .dockerignore file looks like
**/.dockerignore
**/.env
**/.git
**/.gitignore
**/.vs
**/.vscode
**/*.*proj.user
**/azds.yaml
**/charts
**/bin
**/obj
**/Dockerfile
**/Dockerfile.develop
**/docker-compose.yml
**/docker-compose.*.yml
**/*.dbmdl
**/*.jfm
**/secrets.dev.yaml
**/values.dev.yaml
**/.toolstarget
We are using GitLab and Kaniko, building from a gitlab-ci.yml file.
This console application takes 7 minutes to build, but another application written in the Go language takes 40 seconds.
How might I reduce the build time for this application?
Your first FROM stage does nothing but set WORKDIR. Instead, change your FROM base line to FROM mcr.microsoft.com/dotnet/core/runtime:2.2-stretch-slim.
This issue may also be due to Kaniko not properly observing **/someDir .dockerignore patterns. I'm noticing that /obj, /bin, .idea (Rider) and .git folders are all being copied.
https://github.com/GoogleContainerTools/kaniko/issues/1396
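Until that issue is fixed, one workaround sketch is to prune the offending directories from the build context yourself before invoking Kaniko (run from the context root; bin and obj are the usual .NET build-output names):

```shell
# Remove directories that .dockerignore should have excluded,
# so Kaniko never sees them in the context.
find . -type d \( -name bin -o -name obj \) -prune -exec rm -rf {} +
```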
You are also not using the Alpine-based SDK and runtime images.
In the dotnet restore command you can use the --no-cache flag, because Docker layer caching will take care of that.
dotnet publish does a build, so you can skip calling dotnet build. If you want to perform testing, you can call dotnet test before publishing.
Since you are explicitly calling dotnet restore, all subsequent dotnet commands can use the --no-restore option.
FROM mcr.microsoft.com/dotnet/core/sdk:2.2-alpine AS base
#Add whatever tools you need to the base image
RUN apk add --update --no-cache git bash curl zip; \
export PATH="$PATH:/root/.dotnet/tools"; \
dotnet tool install --global dotnet-xunit-to-junit --version 1.0.2
FROM base AS restore
WORKDIR /src
COPY ["TaikunBillerPoller.csproj", ""]
RUN dotnet restore --no-cache "TaikunBillerPoller.csproj"
COPY . .
FROM restore AS publish
ARG VERSION="0.0.0"
RUN dotnet test "TaikunBillerPoller.csproj" --configuration Release --no-restore
RUN dotnet publish "TaikunBillerPoller.csproj" --output /app --configuration Release --no-restore /p:Version=$VERSION
FROM mcr.microsoft.com/dotnet/core/runtime:2.2-alpine AS final
WORKDIR /app
COPY --from=publish /app .
ENTRYPOINT ["dotnet", "TaikunBillerPoller.dll"]
On a 2015 Mac I have an ASP.NET microservice that builds, tests, publishes, and creates a beanstalk_bundle zip using a normal docker build, with the following times:
51s No cache
22s Code change
<1s No code change (pipeline yml change)
Kaniko adds overhead because layer caching is done remotely to some repository (typically).
This time is going to depend a lot on how you have your Kaniko cache and mounted volumes configured. Here is something I use on my local machine for debugging.
#!/bin/bash
# Assuming this is either not an ephemeral machine, or the ephemeral machine
# maps the cache directory to permanent volume.
# We cache images into the local machine
# so that the Kaniko container, which is ephemeral, does not have to pull them each time.
docker run -v $(pwd):/workspace gcr.io/kaniko-project/warmer:latest \
--cache-dir=/workspace/cache \
--image=mcr.microsoft.com/dotnet/core/sdk:2.2-alpine \
--image=mcr.microsoft.com/dotnet/core/aspnet:2.2-alpine
docker run -it --rm \
-v $(pwd):/workspace \
-v $(pwd)/kaniko-config.json:/kaniko/.docker/config.json:ro \
-v $(pwd)/reports:/reports \
-v $(pwd)/beanstalk_bundle:/beanstalk_bundle \
gcr.io/kaniko-project/executor:latest \
--dockerfile "buildTestPublish.Dockerfile" \
--destination "registry.gitlab.com/somePath/theImageName:theVersion" \
--skip-unused-stages \
--cache \
--cache-dir=/workspace/cache \
--verbosity=trace
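If the build runs in GitLab CI, the same executor flags translate to a job roughly like this (a sketch: the job name, stage, and cache-repo path are my assumptions; the CI_* variables are GitLab's predefined ones):

```yaml
build:
  stage: build
  image:
    name: gcr.io/kaniko-project/executor:debug
    entrypoint: [""]
  script:
    - /kaniko/executor
      --context "$CI_PROJECT_DIR"
      --dockerfile "$CI_PROJECT_DIR/buildTestPublish.Dockerfile"
      --destination "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA"
      --skip-unused-stages
      --cache=true
      --cache-repo "$CI_REGISTRY_IMAGE/cache"
```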
I created a simple application that connects to PostgreSQL, but when I containerized it using Docker I can't seem to start it, and no ports were shown when I ran the container. What could be the problem?
The application works without Docker, but when I containerize it, that's where the problem arises.
docker build is successful, and when I use docker run it gives me a container, but when I check with docker ps -a no ports are shown.
This is my code for connecting to the PostgreSQL db:
using System;
using System.Linq;
using Dapper;   // provides the Execute/Query extension methods used below
using Npgsql;

class Program
{
    static void Main(string[] args)
    {
        using (var connection = new NpgsqlConnection("Host=localhost;Username=postgres;Password=password;Database=user"))
        {
            connection.Open();
            connection.Execute("Insert into customer (name) values ('Mikolasola');");
            var value = connection.Query<string>("Select name from customer;");
            Console.WriteLine(value.First());
        }
        Console.Read();
    }
}
Here's my Dockerfile:
FROM mcr.microsoft.com/dotnet/core/sdk:2.2 AS build-env
WORKDIR /app
COPY *.csproj ./
RUN dotnet restore
COPY . ./
RUN dotnet publish -c Release -o out
FROM mcr.microsoft.com/dotnet/core/aspnet:2.2
WORKDIR /app
COPY --from=build-env /app/out .
ENTRYPOINT ["dotnet", "webapplication1.dll"]
Edit: I changed my Dockerfile to this, and now I can see the port:
FROM microsoft/dotnet:2-sdk AS build-env
WORKDIR /app
COPY *.csproj ./
RUN dotnet restore
COPY . ./
RUN dotnet publish -c Release -r linux-x64 -o out
FROM microsoft/dotnet:2-runtime-deps
WORKDIR /app
COPY --from=build-env /app/out ./
ENTRYPOINT ["./webapplication1"]
But I can't run it.
Any help would be much appreciated. Thanks!
PS: I'm a newbie
If you're using Docker for Mac (or Docker for Windows), to access anything on the host you need to use host.docker.internal.
Read the connection string from config and environment variables. That will allow you to override it when running the container.
PS: im a newbie
Pro tip for you :p
You might not want to use -d on the container you're working on, so you can see its logs right there. Had you not used it, you'd have seen the process exit and not wondered why no port was shown :)
UPDATE:
Use the Dockerfile from the original question (before the edit).
I suggested you run it without -d:
docker run -p 8080:80 --name test webapp1
This will show you the aspnet app crash. You'll see what's happening. In this case since I know what's happening I can help, but in general you want to know what's happening. You do that by not running it in a detached state for small things like this, and using docker logs when there's a lot to parse. Anyway...
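If you do run detached, the same post-mortem is available from the CLI (a sketch; the container and image names follow the earlier command):

```shell
docker run -d -p 8080:80 --name test webapp1
docker ps -a        # the crashed container shows an Exited (...) status
docker logs test    # prints the stack trace the app wrote before exiting
```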
... you want the container to access the database on the host, not on its localhost. You do that by using host.docker.internal as the host. Eg:
Host=host.docker.internal;Username=postgres;Password=password;Database=user
This only works if you're using Docker for Windows or Docker for Mac; it doesn't work on Docker Toolbox or on Linux, AFAIK.
Now you need a way for your application to use host.docker.internal when running within Docker and localhost otherwise. My suggestion was to read it from config.
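For instance, keep localhost in the app's default config and override it only for Docker. The ConnectionStrings__Default variable name below assumes the app reads its connection string from .NET configuration, which the code above doesn't do yet, so treat this as a sketch:

```shell
docker run -p 8080:80 --name test \
  -e ConnectionStrings__Default="Host=host.docker.internal;Username=postgres;Password=password;Database=user" \
  webapp1
```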