I have two services in my solution, both ASP.NET projects. One of them performs database operations (a service named Topics), and the other should be a gateway. I deploy both services to Docker with Visual Studio and don't run any commands myself.
In the Topics service I define the SSL and HTTP ports in launchSettings.json, and while it runs in Docker I can use this service in a browser without problems at https://localhost:port/something_here.
Here is the relevant part of launchSettings.json in the Topics service:
"commandName": "Docker",
"launchBrowser": true,
"launchUrl": "{Scheme}://{ServiceHost}:{ServicePort}/topics",
"httpPort": 44381,
"sslPort": 44382,
"publishAllPorts": true,
"useSSL": true
}
But when I try to contact the Topics service from the Gateway service (which runs in another Docker container), I receive an error saying it can't reach the Topics service.
From the Gateway I try to connect to http://localhost:44381/topics and https://localhost:44382/topics, but I get no answer from either of them.
What should I do to be able to connect my Docker containers on my localhost?
Localhost is a reserved hostname that always refers to the machine itself. Within a Docker container, localhost refers to the container itself, not to the host computer. If you want to connect to another Docker container, use that container's hostname.
The hostname of a Docker container is usually the container name, but I have seen that a container created by Visual Studio does not have the container name configured as an alias on the network. If you use docker-compose as described at https://learn.microsoft.com/en-us/visualstudio/containers/tutorial-multicontainer?view=vs-2019, it is probably a lot easier. Otherwise you might be able to set a hostname via the commandLineArgs property in launchSettings.json.
If you have a container FrontEnd and a container Topics, you can connect from the FrontEnd container by using https://topics. Be aware that the port number is probably also different from the one you are using now! Your https://localhost:44382 is probably forwarded to https://topics:443 (since 443 is the default HTTPS port, this is identical to https://topics).
Edit: don't get confused if you also want to make AJAX calls. In that case the call is made from your browser, which runs on the host OS, so you need to use https://localhost:44382 again.
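If you go the docker-compose route suggested above, a minimal sketch might look like the following. The image names and port mappings here are placeholders, not taken from the question:

```yaml
version: "3.4"
services:
  topics:
    image: topics-service        # hypothetical image name
    ports:
      - "44381:80"               # host 44381 -> container 80 (HTTP)
      - "44382:443"              # host 44382 -> container 443 (HTTPS)
  gateway:
    image: gateway-service       # hypothetical image name
    ports:
      - "5000:80"
    depends_on:
      - topics
```

Both services land on the same default compose network, so the gateway can reach the other container at http://topics (container port 80) instead of http://localhost:44381, while the browser on the host still uses the localhost ports.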
In the end I solved my problem by running docker build and exposing the port manually. Visual Studio was doing something wrong that made the container reachable manually but unreachable from another Docker container. So the mistake was using the built-in way to run Docker from VS.
Related
I'm deploying this project (GitHub) locally on a k3d Kubernetes cluster. It includes a Helm chart. There is also documentation for this example, which can be found here.
What I have done so far is listed below, and it works just fine. The problem is that the ClusterIPs it gives are internal to k8s, and I can't access them from outside the cluster. What I want is to be able to open the services in my machine's browser. I was told that I need a NodePort or a LoadBalancer to do that. How can I do that?
// Build Docker Images
// Navigate to root directory -> ./ProtoClusterTutorial
docker build . -t proto-cluster-tutorial:1.0.0
// Navigate to root directory
docker build -f ./SmartBulbSimulatorApp/Dockerfile . -t smart-bulb-simulator:1.0.0
// Push Docker Image to Docker Hub
docker tag proto-cluster-tutorial:1.0.0 hulkstance/proto-cluster-tutorial:1.0.0
docker push hulkstance/proto-cluster-tutorial:1.0.0
docker tag smart-bulb-simulator:1.0.0 hulkstance/smart-bulb-simulator:1.0.0
docker push hulkstance/smart-bulb-simulator:1.0.0
// List Docker Images
docker images
// Deployment to Kubernetes cluster
helm install proto-cluster-tutorial chart-tutorial
helm install simulator chart-tutorial --values .\simulator-values.yaml
// It might fail with the following message:
// Error: INSTALLATION FAILED: Kubernetes cluster unreachable: Get "https://host.docker.internal:64285/version": dial tcp 172.16.1.131:64285: connectex: No connection could be made because the target machine actively refused it.
// which means we don't have a running Kubernetes cluster. We need to create one:
k3d cluster create two-node-cluster --agents 2
// If we want to switch between clusters:
kubectl config use-context k3d-two-node-cluster
// Confirm everything is okay
kubectl get pods
kubectl logs proto-cluster-tutorial-78b5db564c-jrz26
You can use the kubectl port-forward command.
Syntax:
kubectl port-forward TYPE/NAME [options] LOCAL_PORT:REMOTE_PORT
In your case:
kubectl port-forward pod/proto-cluster-tutorial-78b5db564c-jrz26 8181:PORT_OF_POD
Now you can access the application at localhost:8181.
I suggest you follow the official docs of k3d for exposing services.
Use either ingress or nodeport methods.
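With k3d in particular, container ports are only reachable from the host if they were mapped when the cluster was created. A sketch using k3d's built-in loadbalancer (the 8080:80 mapping is an example, not taken from the question):

```shell
# Map host port 8080 to port 80 on k3d's built-in loadbalancer,
# then expose the app through an Ingress (or LoadBalancer service) on port 80.
k3d cluster create two-node-cluster --agents 2 -p "8080:80@loadbalancer"
```

After that, an Ingress routing to the service makes the app reachable at http://localhost:8080 in the browser.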
This depends entirely on your use case. If you are testing from your local machine, you can use the port-forward command: kubectl port-forward <pod-name> <localport>:<remoteport>
If you are using Minikube, then use minikube service <service-name> --url
If you are using a cloud provider like AKS, GKE, or EKS, then you might need some other way of accessing the application: NodePort, LoadBalancer, or Ingress.
A service of type NodePort achieves the same thing, but NodePort services only support the port range 30000-32767, and those port numbers are hard to memorise. Another disadvantage of a NodePort service is that it is reached via a node's IP address, which changes if the node restarts, so it is not used for production purposes.
Create a NodePort service : Node port service
A LoadBalancer service exposes an external IP, and you can then access the service at that external IP and port. But if you have 100 services, you will be charged for 100 external IPs, and this hampers the budget.
Create a load balancer service on Kubernetes : Load Balancer service
Another way of exposing an application is to use an ingress controller to achieve the same thing. Using Ingress you can expose 100 applications with one external IP. You will need to install the ingress controller and then configure the routing rules in an ingress file.
Setup ingress controller on Kubernetes : Ingress controller
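As a concrete sketch, a NodePort service for the tutorial deployment might look like the following. The selector label and targetPort are assumptions; they have to match the pod labels and container port that the Helm chart actually uses:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: proto-cluster-tutorial-nodeport
spec:
  type: NodePort
  selector:
    app: proto-cluster-tutorial   # must match the pod labels from the chart
  ports:
    - port: 80          # cluster-internal service port
      targetPort: 8080  # container port (assumption)
      nodePort: 30080   # must be within 30000-32767
```

Apply it with kubectl apply -f nodeport.yaml; the app is then reachable at <node-ip>:30080, provided that node port is also exposed to the host by the cluster setup.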
I'm using ASP.NET Core data protection with default DI behavior. That works fine when my ASP.NET application is hosted on IIS. Now I have an application that needs to run as a service, so I'm using Microsoft.Extensions.Hosting.WindowsServices to handle the Windows service part with our standard
Host.CreateDefaultBuilder(args)
.UseWindowsService()
The BackgroundService then hosts ASP.NET Core with your standard
var builder = Host.CreateDefaultBuilder()
.ConfigureAppConfiguration((hostingContext, config) =>
{
config.AddJsonFile("secrets.json", optional: true, reloadOnChange: true);
})
.ConfigureWebHostDefaults(....)
Inside the background service, I can then resolve an instance of IDataProtectionProvider, create a protector, and use it to unprotect my secrets:
var dataProtectionProvider = Container.Resolve<Microsoft.AspNetCore.DataProtection.IDataProtectionProvider>();
var protector = dataProtectionProvider.CreateProtector(appName);
var decryptedSecret = protector.Unprotect(some secret)
Now that all works fine as long as I run my application from the CLI. But running it as a service (same file, same location, and of course under the same account), I get an 'invalid payload' exception when I call Unprotect.
I know same path and same account is important, so that's taken care of. I also know that the application can find secrets.json as I wrote some probing code that checks if the file is present and can be read before I even try to unprotect. I'm even checking if the string I'm trying to unprotect is null/empty (which it isn't).
I finally installed a debug build as a service and attached the debugger, and when I look at the IDataProtectionProvider, it has a Purpose. When running as a service, that's c:\windows\system32; when my app runs from the CLI, it's the path to the exe. So, is there a way to specify the purpose myself so things behave the same regardless of CLI/service?
So how can I control the purpose?
Having noted the difference in the purpose of the IDataProtectionProvider, I was well on my way to solving this. The solution was to set a static purpose as explained here.
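A minimal sketch of pinning a static purpose, assuming the standard Microsoft.AspNetCore.DataProtection package; the application name string here is a placeholder:

```csharp
// In ConfigureServices: SetApplicationName pins the application discriminator,
// which otherwise defaults to the content root path (the value that differed
// between the CLI run and the service run).
services.AddDataProtection()
    .SetApplicationName("MyStableAppName"); // placeholder; use any fixed string
```

With the discriminator fixed, payloads protected by the CLI run can be unprotected by the service run, and vice versa, as long as the key ring and account stay the same.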
I am currently writing a microservice in .NET Standard that writes a Dockerfile, builds the Docker image associated with that Dockerfile (which implicitly causes a pull image), and pushes the image to a private Docker registry. Those Docker operations are all performed using the Docker.Dotnet library that MSFT maintains. I believe that this is mostly just a wrapper around calls to the Docker Remote API. The execution context of this microservice is in a K8s cluster hosted on AWS, or internally on bare-metal, depending on the deployment.
Previously our Docker registry was just a private registry hosted internally on Artifactory, but we are migrating to a private DockerHub registry/repository. This migration has come with some authentication problems.
We authenticate all of the pull and push operations with an AuthConfig that consists of the username and password for the account associated with the registry. The AuthConfig is either added to a Parameters object and then passed to the call:
imageBuildParameters.AuthConfigs = new Dictionary<string,
AuthConfig>() { { DockerEnvVariables.DockerRegistry, authConfig } };
…
using (var responseStream = _dockerClient.Images.BuildImageFromDockerfileAsync(tarball, imageBuildParameters).GetAwaiter().GetResult())
Or it’s (strangely, to me) both passed in a parameter and separately to the call:
ImagePushParameters imagePushParameters = new ImagePushParameters() { ImageID = image.Id, RegistryAuth = authConfig, Tag = "latest" };
_dockerClient.Images.PushImageAsync(RepoImage(image.Id), imagePushParameters, authConfig, this).GetAwaiter().GetResult();
We are currently getting auth errors for any of the various Docker registry URLs I've tried for DockerHub, as such (where I've redacted the organization/namespace and image):
{“message”:“pull access denied for registry-1.docker.io//, repository does not exist or may require ‘docker login’: denied: requested access to the resource is denied”},“error”:“pull access denied for registry-1.docker.io//, repository does not exist or may require ‘docker login’: denied: requested access to the resource is denied”
The list of DockerHub URLs that I've tried follows; all of them gave either the error above or a different "Invalid reference format" error:
Hub.docker.io
Hub.docker.io/v1/
Docker.io
Docker.io/v1/
Index.docker.io
Index.docker.io/v1/
Registry.hub.docker.com
registry-1.docker.io
Strangely enough, if I run it locally on my Windows system, the bolded URLs actually work. However, they all fail when deployed to the cluster (a different method of interacting with the Docker socket/npipe?).
Does anyone know the correct URL I need to set to properly authenticate and interact with DockerHub? Or if my current implementation and usage of the Docker.Dotnet library is incorrect in some way? Again, it works just fine with a private Docker registry on Artifactory, and also with DockerHub when run on my local Windows system. Any help would be greatly appreciated. I can provide any more information that is necessary, be it code or configuration.
If I have a public URL such as https://dogsandcats.com, how can I connect to a Docker container deployed on that server through it?
Currently, I have the following Dockerfile for my .Net Core project:
FROM microsoft/dotnet:sdk AS build-env
WORKDIR /app
# Copy csproj and restore as distinct layers
COPY *.csproj ./
RUN dotnet restore
# Copy everything else and build
COPY . ./
RUN dotnet publish -c Release -o out
# Build runtime image
FROM microsoft/dotnet:aspnetcore-runtime
WORKDIR /app
COPY --from=build-env /app/out .
EXPOSE 8080
ENTRYPOINT ["dotnet", "MedicineInventoryManagement.dll"]
I am exposing port 8080, and when I visit localhost:8080 in my local environment, I can see my web server serving the application. I have mapped port 8080 on https://dogsandcats.com, but it doesn't run as it does on my local computer.
Do I have to do something with the following code block in Program.cs?
public class Program
{
public static void Main(string[] args)
{
CreateWebHostBuilder(args).Build().Run();
}
public static IWebHostBuilder CreateWebHostBuilder(string[] args) =>
WebHost.CreateDefaultBuilder(args)
.UseStartup<Startup>();
}
}
Or does the issue lie elsewhere?
The microsoft/dotnet:aspnetcore-runtime image sets the ASPNETCORE_URLS environment variable to http://+:80, which means that if you have not explicitly set a URL in your application, via UseUrls in your Program.cs for example, then your application will be listening on port 80 inside the container.
https://hub.docker.com/r/microsoft/aspnetcore/
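Under that assumption, a sketch of the fix: keep the container listening on its default port 80 and publish it to whatever host port you want (8080 here is just an example). Only the runtime stage of the Dockerfile changes:

```dockerfile
# Runtime stage: aspnetcore-runtime listens on port 80 by default,
# so expose 80 rather than 8080
FROM microsoft/dotnet:aspnetcore-runtime
WORKDIR /app
COPY --from=build-env /app/out .
EXPOSE 80
ENTRYPOINT ["dotnet", "MedicineInventoryManagement.dll"]
```

Then docker run -p 8080:80 <image> maps host port 8080 to the container's port 80. Alternatively, set ENV ASPNETCORE_URLS=http://+:8080 in the Dockerfile if you really want the app itself to listen on 8080.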
I would debug issues like these using a bottom up approach.
Do docker ps to check which ports your container exposes to the host and which ports it binds to. Have a look under the PORTS column. You can also use the following command, providing your container name:
docker inspect --format='{{range $p, $conf := .NetworkSettings.Ports}} {{$p}} -> {{(index $conf 0).HostPort}} {{end}}' $INSTANCE_ID
If a port is exposed as intended, there are a few things to check on the host. Now depending on how your environment is setup, the steps can differ but you'll see what I mean.
Start by looking into the firewall rules. For example, if you are on Ubuntu with UFW enabled, check whether the rules allow access to that port. If you are behind a home/office router, you might need to open ports to let public traffic reach that port. If you are running a server like Apache/Nginx, check whether the virtual hosts/server blocks are configured properly.
If you find that your ports are open as expected, try accessing the application using the public IP of your network. There is no point looking into the DNS records if you can't access it using the public IP of the machine; after all, that's what the DNS records point to. If you are unable to access it, go back to Step #2.
If Step#3 is all clear, then I would look into the DNS records and try to identify the issue.
To understand this process, here is a very high-level overview of it. Try a bottom-up approach so you can pinpoint exactly where you are stuck:
[DNS] => [Public IP of your network] => [Hardware Firewall like router etc.] => [Nginx/Apache web server etc. if any] => [Machine's firewall like OS or third party] => Docker Host Port => Dotnet Core Port in the Container.
You could also use something like ngrok if you just want to test something quickly for development purposes. It takes away the pain of Step #2 and Step #3 by setting up a proxy for you (especially when you are behind a strict firewall you don't control). I'm not associated with ngrok in any way. There are a few other alternatives to it as well.
I am creating a simple Windows service that hosts a Nancy instance to provide views of its internal data. Everything works as expected when using a browser on the local machine; I see the view that it serves up. However, I cannot find any reason why it will not accept access from a remote browser (on the same network). Access from a remote browser simply stalls for a while; IE eventually displays "This page can't be displayed", and Safari on an iPad shows a partial progress bar for a while and then does nothing.
I'm binding using all local IPs, not just localhost.
I am using the GetUriParams() function at this link to discover all local IP addresses for binding. http://www.codeproject.com/Articles/694907/Embed-a-web-server-in-a-windows-service
_nancyHost = new NancyHost(GetUriParams(port));
_nancyHost.Start();
I discovered at this page that binding to localhost works for local access only. http://forums.asp.net/t/1881253.aspx?More+SelfHost+Documentation
The IPs that this function discovers are for the Ethernet adapter, the wireless adapter, and two VMware network adapters from a prior installation of VMware Player. I've tried remote access both by machine name and by the literal IP of the Ethernet adapter.
I added entries to urlacl list.
I have used the netsh http add urlacl command as recommended in many places, including at this link: Remote access to a Nancy Self Host
If I perform netsh http show urlacl, I see the entry for the port I'm using.
I tried different Nancy configs
If I set the Nancy configuration option UrlReservations.CreateAutomatically, I get security prompts, and after allowing them I see new entries for all of the local IPs in the netsh http show urlacl output, but remote access is still not allowed. I also tried the RewriteLocalHost option set both true and false.
I've tried starting Nancy with http://+:3684 or http://*:3684 (which gets a parsing exception from Uri()), and with http://0.0.0.0:3684 (which gets an exception from AddAllPrefixes() within HttpListener()).
I added the EXE to Windows firewall
I have created firewall exceptions as described here: https://msdn.microsoft.com/en-us/library/ms733768.aspx
The associated rule shows Private,Public and "Any" for every column with both TCP and UDP.
I tried running Nancy in different environments: the Windows service running as Local System, a console app within the Visual Studio 2013 debugger, and the console app Run As Administrator.
I imagine it's a simple security setting, but I've googled and searched and tried various things for a couple of days now.
What am I missing?
This answer provided the clue I needed.
https://stackoverflow.com/a/21364604/1139376
This is because HttpListener is built on top of http.sys which will listen on the port you specified on behalf of your program.
It wasn't my EXE doing the actual listening. All I needed to do was add an inbound rule to the Windows Firewall for the "System" program and the specific TCP port I'm using. That allowed remote access.
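For reference, such a rule can also be created from an elevated command prompt; this is a sketch assuming the port 3684 mentioned above (the rule name is arbitrary, and whether you also scope it to program=System depends on your setup):

```shell
:: Allow inbound TCP on the Nancy port. http.sys listens on behalf of
:: the app, so the rule is scoped to the port rather than to the EXE.
netsh advfirewall firewall add rule name="Nancy 3684" dir=in action=allow protocol=TCP localport=3684
```

After adding the rule, the self-hosted site should be reachable from other machines on the network at http://<machine-name>:3684.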
Use the HostConfiguration and let Nancy make the URL reservations automatically for you:
var config = new HostConfiguration
{
RewriteLocalhost = true,
UrlReservations = new UrlReservations { CreateAutomatically = true }
};
host = new NancyHost(new Uri("http://localhost:8080"), new DefaultNancyBootstrapper(), config);
host.Start();
Note that this will force ACL to create network rules for new ports if they do not already exist.