Using the DataStax C# driver, I'm trying to connect to Cassandra, which was deployed to Azure Kubernetes Service (AKS) using a Bitnami Helm chart.
var cluster = Cluster.Builder()
.AddContactPoint("127.0.0.0") // example IP
.WithCredentials("cassie", "some-pass")
.Build();
When testing locally I use kubectl port-forward, but when I deploy my service to Kubernetes I want to use the service name instead; many applications my colleagues have shown me do exactly that. However, when I use the address the Helm chart prints after installation:
Cassandra can be accessed through the following URLs from within the cluster:
CQL: service-name.some-namespace.svc.cluster.local:9042
I'm unable to connect: I get a Cassandra.NoHostAvailableException and have to fall back to an IP address.
How can I solve this? The IP changes every time I redeploy.
How can I use the name instead of the IP?
Apparently, providing the DNS name in "DNS:PORT" format as a parameter to AddContactPoint was not working; the contact point is expected to be a host name only. Using the WithPort method did the trick:
var cluster = Cluster.Builder()
.WithPort(PORT)
.AddContactPoint("dns")
.WithCredentials("cassie", "some-pass")
.Build();
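Applied to the in-cluster address from the Helm output above, the builder would look roughly like this (a sketch: the service DNS name, port, and credentials are the placeholders from the question):
var cluster = Cluster.Builder()
    .WithPort(9042) // CQL port from the chart output
    .AddContactPoint("service-name.some-namespace.svc.cluster.local") // host name only, no ":9042" suffix
    .WithCredentials("cassie", "some-pass")
    .Build();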
Related
I have a one-node test Cassandra cluster (IP 192.168.108.198). In cassandra.yaml, listen_address is set to localhost.
I can connect to it via DBeaver (using SSH port forwarding) and work with it.
I wrote a .NET app to use it, but the app can't connect to Cassandra: "All hosts tried for query failed (tried 192.168.108.198:9042: SocketException". It happens on the last line of the code below. Is it a problem with the Cassandra config or with my app?
_connectionConfig = connectionConfig;
_socketOptions = new SocketOptions();
_socketOptions.SetReadTimeoutMillis(90000);
_cluster = Cluster.Builder()
.WithPort(_connectionConfig.ConnectionPort)
.AddContactPoints(_connectionConfig.ConnectionStrings)
.WithSocketOptions(_socketOptions)
.WithQueryOptions(new QueryOptions().SetConsistencyLevel(ConsistencyLevel.One))
.WithAuthProvider(new PlainTextAuthProvider(_connectionConfig.UserName, _connectionConfig.UserPassword))
.Build();
_session = _cluster.Connect(_connectionConfig.KeySpaceName);
So I corrected some settings in cassandra.yaml, and now the connection works fine (other problems appeared, but that's a separate matter). The correct settings are:
seed_provider:
  - class_name: org.apache.cassandra.locator.SimpleSeedProvider
    parameters:
      # seeds is actually a comma-delimited list of addresses.
      # Ex: "<ip1>,<ip2>,<ip3>"
      - seeds: "192.168.108.198"
listen_address: 192.168.108.198
rpc_address: 192.168.108.198
192.168.108.198 is the Cassandra node's IP. With listen_address: localhost, the node binds only to the loopback interface, which is why connections worked through the SSH tunnel but not directly from the app.
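After restarting Cassandra with these settings, you can verify that the node is reachable on its real address (a sketch; the credentials below are placeholders, not values from the question):
nodetool status
cqlsh 192.168.108.198 9042 -u cassandra -p cassandra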
I'm deploying this project (GitHub) locally on a k3d Kubernetes cluster. It includes a Helm chart. There is also documentation for this example, which can be found here.
What I have done so far is shown below, and it works just fine. The problem is that the ClusterIPs it assigns are internal to Kubernetes, so I can't reach them from outside the cluster. I want to be able to open the services in my machine's browser. I was told I need a NodePort or a LoadBalancer for that. How can I do that?
# Build Docker images
# Navigate to root directory -> ./ProtoClusterTutorial
docker build . -t proto-cluster-tutorial:1.0.0
# Navigate to root directory
docker build -f ./SmartBulbSimulatorApp/Dockerfile . -t smart-bulb-simulator:1.0.0
# Push Docker images to Docker Hub
docker tag proto-cluster-tutorial:1.0.0 hulkstance/proto-cluster-tutorial:1.0.0
docker push hulkstance/proto-cluster-tutorial:1.0.0
docker tag smart-bulb-simulator:1.0.0 hulkstance/smart-bulb-simulator:1.0.0
docker push hulkstance/smart-bulb-simulator:1.0.0
# List Docker images
docker images
# Deployment to the Kubernetes cluster
helm install proto-cluster-tutorial chart-tutorial
helm install simulator chart-tutorial --values .\simulator-values.yaml
# It might fail with the following message:
# Error: INSTALLATION FAILED: Kubernetes cluster unreachable: Get "https://host.docker.internal:64285/version": dial tcp 172.16.1.131:64285: connectex: No connection could be made because the target machine actively refused it.
# which means we don't have a running Kubernetes cluster. We need to create one:
k3d cluster create two-node-cluster --agents 2
# If we want to switch between clusters:
kubectl config use-context k3d-two-node-cluster
# Confirm everything is okay
kubectl get pods
kubectl logs proto-cluster-tutorial-78b5db564c-jrz26
You can use the kubectl port-forward command.
Syntax:
kubectl port-forward TYPE/NAME [options] LOCAL_PORT:REMOTE_PORT
In your case:
kubectl port-forward pod/proto-cluster-tutorial-78b5db564c-jrz26 8181:PORT_OF_POD
Now you can access the application at localhost:8181.
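If the chart also creates a Service, you can forward to it instead of a specific pod, which keeps working across pod restarts (the service name and port 8080 here are assumptions; match them to your chart):
kubectl port-forward svc/proto-cluster-tutorial 8181:8080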
I suggest you follow the official k3d docs for exposing services.
Use either the ingress or the NodePort method.
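For example, k3d can map a host port to its built-in load balancer when the cluster is created, so traffic from your browser reaches an ingress or service inside (the 8081:80 mapping below is an illustration, not a value from the question):
k3d cluster create two-node-cluster --agents 2 -p "8081:80@loadbalancer"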
This totally depends on your use case. If you are testing from your local machine, you can use the port-forward command: kubectl port-forward <pod-name> <localport>:<remoteport>
If you are using Minikube, then use minikube service <service-name> --url
If you are using a cloud provider like AKS, GKE, or EKS, then you might need another way of exposing the application: NodePort, LoadBalancer, or Ingress.
A NodePort service can achieve this as well, but it only supports ports in the 30000-32767 range, and such port numbers are hard to memorise. Another disadvantage is that clients connect via the node's IP address, which changes if the node restarts, so NodePort is rarely used for real projects.
Create a NodePort service: Node port service
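For illustration, a minimal sketch of a NodePort service manifest (the service name, selector label, and ports are assumptions; match them to your deployment):
apiVersion: v1
kind: Service
metadata:
  name: proto-cluster-tutorial-nodeport
spec:
  type: NodePort
  selector:
    app: proto-cluster-tutorial
  ports:
    - port: 8080        # service port inside the cluster
      targetPort: 8080  # container port
      nodePort: 30080   # must be in the 30000-32767 range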
A LoadBalancer service exposes an external IP, and you can then access the service at <external-ip>:<port>. But if you have 100 services, you will be charged for 100 external IPs, which hampers the budget.
Create a LoadBalancer service on Kubernetes: Load Balancer service
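A sketch of the LoadBalancer variant under the same assumed names:
apiVersion: v1
kind: Service
metadata:
  name: proto-cluster-tutorial-lb
spec:
  type: LoadBalancer
  selector:
    app: proto-cluster-tutorial
  ports:
    - port: 80         # external port on the load balancer
      targetPort: 8080 # container port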
Another way of exposing an application is an ingress controller, which achieves the same thing. Using ingress, you can expose 100 applications through one external IP. You will need to install the ingress controller and then configure the routing rules in an ingress resource.
Set up an ingress controller on Kubernetes: Ingress controller
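For illustration, a minimal sketch of an ingress resource, assuming an ingress controller is installed (k3d clusters ship with Traefik by default) and the app is exposed through a Service named proto-cluster-tutorial on port 8080 (both names are assumptions):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: proto-cluster-tutorial
spec:
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: proto-cluster-tutorial
                port:
                  number: 8080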
I followed "public access" to set up the configuration. I have two goals, Firstly, I want to create topic from local terminal by using this command line "/bin/kafka-topics.sh --create --bootstrap-server ZookeeperConnectString --replication-factor 3 --partitions 1 --topic ExampleTopicName", but it always return "the broker is not available". Secondly, I want to connect MKS from local .Net Application. However, it seams cannot connect to the MKS successfully.
This is some of the configuration attached to my MSK cluster:
Created public subnets 172.31.0.0/20 and 172.31.16.0/20 and attached an Internet Gateway
Turned unauthenticated access off and turned on the SASL/SCRAM access-control method. I also attached a secret for this authentication and set allow.everyone.if.no.acl.found to false in the cluster's configuration
Turned on public access
(Screenshots: cluster configuration, producer configuration, security group.)
Can anyone give me some advice or hints? From my research, I'm not sure whether I have to add listeners to my cluster configuration. Thanks for your time and consideration.
I was struggling with MSK too. I finally got it working; maybe these hints will help:
according to the AWS docs, only SCRAM-SHA-512 is supported, not SCRAM-SHA-256
in the security group, I added a rule for inbound traffic to accept connections from anywhere (0.0.0.0/0)
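To illustrate the client side, here is a minimal sketch of a .NET producer configured for MSK public access with SASL/SCRAM, using the Confluent.Kafka package (the bootstrap broker address, port, and credentials are placeholders, not values from the question; MSK's public SASL/SCRAM listener typically uses port 9196):
using Confluent.Kafka;

var config = new ProducerConfig
{
    // Public bootstrap brokers from the MSK console (placeholder value)
    BootstrapServers = "b-1.mycluster.abc123.kafka.us-east-1.amazonaws.com:9196",
    SecurityProtocol = SecurityProtocol.SaslSsl,
    SaslMechanism = SaslMechanism.ScramSha512, // per the hint above: SHA-512, not SHA-256
    SaslUsername = "my-user",     // from the secret attached to the cluster
    SaslPassword = "my-password"
};

using var producer = new ProducerBuilder<Null, string>(config).Build();
var result = await producer.ProduceAsync(
    "ExampleTopicName",
    new Message<Null, string> { Value = "hello" });
Console.WriteLine($"Delivered to {result.TopicPartitionOffset}");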
Hope that helps,
donbachi
I'm trying to connect to an on-prem Service Fabric cluster from C# code to manage some services:
using System.Fabric;
...
var fabricClient = new FabricClient();
var services = await fabricClient.QueryManager.GetServiceListAsync(new Uri("fabric:/TestConsumer"));
var service = services.FirstOrDefault(e => e.ServiceName.AbsolutePath.Contains("TestManagedConsumer"));
..
(I found the above example code here.)
The problem is that I don't actually know how to connect to the cluster. The above code throws this exception:
System.Fabric.FabricElementNotFoundException: 'Application not found'
Where and how do I specify where my cluster is running? Furthermore, do I need some method of authentication? If I simply navigate to http://host:19080, I'm able to connect without logging in.
I'm pretty new to Service Fabric, but I've done some digging and I am not turning up much. There seems to be little to no example code out there for this type of thing. Any suggestions?
I feel pretty dumb having found what I was looking for about five minutes after posting this question. A Google search for "new FabricClient" turned up some examples, including this page: https://github.com/Microsoft/azure-docs/blob/master/articles/service-fabric/service-fabric-connect-to-secure-cluster.md, which shows the following example:
To connect to a remote unsecured cluster, create a FabricClient instance and provide the cluster address:
FabricClient fabricClient = new FabricClient("clustername.westus.cloudapp.azure.com:19000");
I was able to connect to my cluster with this code.
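For completeness, a sketch combining the cluster endpoint with the query from the question (the host name is a placeholder; note that 19000 is the client connection port, while 19080 is the HTTP port used by Service Fabric Explorer, which is why browsing to it works without code changes):
using System;
using System.Fabric;
using System.Linq;

// Connect to a remote, unsecured cluster via the client endpoint (default port 19000).
var fabricClient = new FabricClient("clustername.westus.cloudapp.azure.com:19000");

// Query services under an application (application name from the question).
var services = await fabricClient.QueryManager
    .GetServiceListAsync(new Uri("fabric:/TestConsumer"));
var service = services.FirstOrDefault(
    e => e.ServiceName.AbsolutePath.Contains("TestManagedConsumer"));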
There is also some good example code here: https://learn.microsoft.com/en-us/azure/service-fabric/service-fabric-deploy-remove-applications-fabricclient
After deploying an Azure Container Service (ACS) cluster with Swarm, how does one connect using the example given:
var credentials = new CertificateCredentials (new X509Certificate2 ("CertFile", "Password"));
var config = new DockerClientConfiguration("http://ubuntu-docker.cloudapp.net:4243", credentials);
DockerClient client = config.CreateClient();
I have made the certificate but just can't figure out what the proper endpoint to use is.
The URL from the Azure portal: <name>-mgmt.<region>.cloudapp.azure.com
ACS does not use certs by default. We use SSH tunneling, as documented at https://learn.microsoft.com/en-us/azure/container-service/container-service-connect
If you have connected to the masters and manually configured them to use certs, as well as correctly installing those certs on the masters, then there is nothing magical about the endpoints and connection details. It's just Docker, so follow the appropriate Docker documentation. The correct URL is, as you note in your question, <name>-mgmt.<region>.cloudapp.azure.com.
However, you should be aware that since we do not use certs by default, we do not open the necessary ports on the master LB. You will also need to open those on your master LB. For an example (which is against the agent LB, but the process is the same) see https://learn.microsoft.com/en-us/azure/container-service/container-service-enable-public-access
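If you stay with the default SSH-tunnel approach instead of certs, a sketch of connecting Docker.DotNet through a local tunnel (the local port 2375 and the SSH command are illustrative assumptions; ACS masters listen for SSH on port 2200):
// Assumes a tunnel to the Swarm master has been opened first, e.g.:
//   ssh -fNL 2375:localhost:2375 -p 2200 azureuser@<name>-mgmt.<region>.cloudapp.azure.com
using Docker.DotNet;
using Docker.DotNet.Models;

// No credentials needed: the tunnel endpoint is plain HTTP on localhost.
var config = new DockerClientConfiguration(new Uri("http://localhost:2375"));
using DockerClient client = config.CreateClient();

// Simple smoke test: list all containers visible to the Swarm master.
var containers = await client.Containers.ListContainersAsync(
    new ContainersListParameters { All = true });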