Wednesday 29 August 2018

DataStax Containers on Kubernetes

In this post I will help you deploy DataStax Enterprise (DSE) images on Kubernetes and create a two-node DSE cluster.

We will deploy containers for DSE Search, DSE Graph, DSE Database, and DSE Analytics.
In this step we will create a DSE Search enabled container and deploy it on the node machines.

Commands used

The below YAML file helps you create a DSE service for dse-search:
#kubectl apply -f dse-search-lb-services.yaml

This will help us create a volume:
#kubectl apply -f dse-search-volume.yaml

This will help us create a dse-search StatefulSet:
#kubectl apply -f dse-search-statefulset.yaml
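
If you want a feel for what these manifests contain, here is a minimal sketch of a dse-search load-balanced service and StatefulSet. The image tag, ports, and labels below are illustrative assumptions based on the public datastax/dse-server image, not the exact contents of the repository files:

apiVersion: v1
kind: Service
metadata:
  name: dse-search-lb
spec:
  type: LoadBalancer
  selector:
    app: dse-search
  ports:
  - name: cql
    port: 9042          # CQL native transport
  - name: solr
    port: 8983          # DSE Search (Solr) admin
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: dse-search
spec:
  serviceName: dse-search        # headless service for stable pod DNS (not shown here)
  replicas: 2                    # two DSE Search nodes
  selector:
    matchLabels:
      app: dse-search
  template:
    metadata:
      labels:
        app: dse-search
    spec:
      containers:
      - name: dse
        image: datastax/dse-server:6.0.2   # example tag
        env:
        - name: DS_LICENSE
          value: accept                    # required by the DataStax image
        args: ["-s"]                       # -s starts the node with DSE Search enabled
        ports:
        - containerPort: 9042
        - containerPort: 8983
        volumeMounts:
        - name: dse-data
          mountPath: /var/lib/cassandra
  volumeClaimTemplates:
  - metadata:
      name: dse-data
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 10Gi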


In the GitHub repository you can also find the other YAML files that we used to configure the DSE Graph, DSE Database, and DSE Analytics containers: https://github.com/vishnuc95/Scripts-Kubernetes-Yaml-

As mentioned above, apply each YAML file to create the service, volume, and StatefulSet for each DSE container.

#kubectl apply -f dse-graph-lb-services.yaml
#kubectl apply -f dse-graph-volume.yaml
#kubectl apply -f dse-graph-statefulset.yaml
#kubectl apply -f dse-database-lb-service.yaml
#kubectl apply -f dse-database-volume.yaml
#kubectl apply -f dse-database-statefulset.yaml
#kubectl apply -f dse-analytics-lb-services.yaml
#kubectl apply -f dse-analytics-volume.yaml
#kubectl apply -f dse-analytics-statefulset.yaml
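
Once everything is applied, you can verify that the services, StatefulSets, and pods have come up, for example:

#kubectl get svc
#kubectl get statefulsets
#kubectl get pods -o wide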

Finally, you can see two instances of each of the containers. Now we are done with the DSE containers.

Now we can connect to one of the services through its load-balanced IP from DSE Studio.

Log in to Studio and create a connection to the database through the load-balanced IP.
Please refer to my previous blog post for creating the Studio service: http://biexplored.blogspot.com/2018/08/accessing-kubernetes-pods-from-outside.html

In the screenshot below you can see that we have successfully created a connection to the database through the container.

Accessing Kubernetes pods from outside the cluster and configuring DataStax DSE Studio

To access a Kubernetes service from the outside world, we can use two methods:

1) Through a NodePort
2) Through a LoadBalancer

If we use a NodePort, we can access the application through the master IP and a port that we specify (or that Kubernetes assigns automatically).

If we use a LoadBalancer, we can access the application through the external IP of the load balancer and its TCP port.
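
The difference comes down to the 'type' field of the Kubernetes Service. A minimal sketch of the two variants (the names and ports below are placeholders, not taken from the repository files):

# NodePort: reachable at <node-IP>:30080 on every node
apiVersion: v1
kind: Service
metadata:
  name: my-app-nodeport
spec:
  type: NodePort
  selector:
    app: my-app
  ports:
  - port: 80
    targetPort: 8080
    nodePort: 30080
---
# LoadBalancer: reachable at the external IP assigned by the load balancer
apiVersion: v1
kind: Service
metadata:
  name: my-app-lb
spec:
  type: LoadBalancer
  selector:
    app: my-app
  ports:
  - port: 80
    targetPort: 8080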

Let us look at an example of accessing an application from outside the cluster.

In this example I am using the MetalLB load balancer in layer 2 mode.

Step 1: Install MetalLB on the cluster.

For this purpose we can use the metalib-kubectl.yaml file, which you can download from GitHub.

metalib-kubectl.yaml: https://github.com/vishnuc95/Scripts-Kubernetes-Yaml-/blob/master/metalib-kubectl.yaml

# kubectl apply -f metalib-kubectl.yaml


After installing, you can see that the metallb-system namespace has one controller running on the master node and two speakers, one on each slave node.
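
You can verify this with:

#kubectl get pods -n metallb-system -o wide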

Step 2: Configure MetalLB to announce using layer 2 mode and give it some IP addresses to manage.

We can have a look at the configuration file.
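
The layer 2 configuration is a ConfigMap named 'config' in the metallb-system namespace. A minimal sketch matching the address range discussed below (the pool name 'default' is an assumption) looks like this:

apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 10.30.1.15-10.30.1.19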

Here you can see that we have specified a range of private IPs, 10.30.1.15-10.30.1.19, so MetalLB will assign addresses from this range. Please use an IP range that is available in your environment.

Note: Please select the range of private IPs after discussing with your network team, to make sure there will not be any IP conflict in the future.

Now we can apply these changes to the system.
# kubectl apply -f metalib-layer2-config.yaml

metalib-layer2-config.yaml: https://github.com/vishnuc95/Scripts-Kubernetes-Yaml-/blob/master/metalib-layer2-config.yaml

You can see the logs using the command:
#kubectl logs -l component=speaker -n metallb-system

Step 3: Create a load-balanced service, and observe how MetalLB sets it up.

Now that we are done with the metallb-system, we can proceed with deploying an application in the cluster.

We are going to deploy DSE studio as a load balanced service.

#kubectl apply -f dse-studio-deployment.yaml
dse-studio-deployment.yaml : https://github.com/vishnuc95/Scripts-Kubernetes-Yaml-/blob/master/dse-studio-deployment.yaml

Change the values of

nodePort:
loadBalancerIP:

Set loadBalancerIP to one of the addresses from the range you provided when creating the layer 2 configuration, and pick a nodePort that suits your environment.
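
For reference, the Service section of such a deployment might look roughly like this (it uses port 9091 and nodePort 30455 as mentioned later in this post, plus an example address from the MetalLB pool; treat these as placeholders rather than the exact contents of the repository file):

apiVersion: v1
kind: Service
metadata:
  name: dse-studio
spec:
  type: LoadBalancer
  loadBalancerIP: 10.30.1.16      # pick an address from the MetalLB pool
  selector:
    app: dse-studio
  ports:
  - port: 9091                    # DSE Studio web UI
    targetPort: 9091
    nodePort: 30455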






We can see that a new service has started with type LoadBalancer, and it has been assigned an external IP address and port.
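
You can check the assigned external IP with:

#kubectl get svc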

You will be able to access Studio using the external IP and TCP port 9091.

Try to access Studio using master_machine_ip:30455 from outside the machine (here we are accessing it through the NodePort).

In our case the url is http://<Master_IP>:30455

So you can see DataStax Studio is up and running.




Kubernetes Dashboard setup

In my previous post I explained how to create a three-node master/slave Kubernetes cluster:
http://biexplored.blogspot.com/2018/08/kubernetes-cluster-configuration-and.html

This section will help you deploy the Kubernetes dashboard on the cluster, which will help us monitor the different applications hosted on it.

Step 1: Download and install the Kubernetes dashboard using its deployment manifest.
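
The dashboard manifests live in the kubernetes/dashboard GitHub repository; the exact URL has changed over the years, so check that project for the current one. A commonly used install command in 2018 (an assumption, not necessarily the exact link used here) was:

#kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml

Note that this 'recommended' manifest serves the UI over HTTPS; since the dashboard is later reached over plain HTTP on a NodePort, the HTTP ('alternative') variant of the manifest may be more appropriate.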




After installing the dashboard you can see that a new service, kubernetes-dashboard, has started, and the required pods have been deployed in the cluster.

The below command lists the services running on the cluster (include all namespaces, since the dashboard service lives in kube-system):
#kubectl get svc --all-namespaces

For listing the pods:
#kubectl get pods --all-namespaces

After that, describe the dashboard service to get its details:
#kubectl describe svc kubernetes-dashboard -n kube-system


As shown in the picture, you can see the service is running on NodePort 31000.
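
The dashboard service is not exposed on a NodePort by default. If yours is still of type ClusterIP, one way to reproduce the setup above is to edit the service and add a NodePort; the port and targetPort values below assume the HTTP variant of the dashboard and are illustrative:

#kubectl -n kube-system edit svc kubernetes-dashboard

spec:
  type: NodePort            # was ClusterIP
  ports:
  - port: 80
    targetPort: 9090
    nodePort: 31000         # the NodePort used in this post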

Try to access the dashboard through http://<Master-IP>:31000 from the browser, and you will see the dashboard window in the browser.




Tuesday 28 August 2018

Kubernetes cluster configuration and set up.

This post will explain how to configure a three-node Kubernetes cluster on CentOS machines.

Before you begin, I will give you a basic idea of what we are going to achieve here.

Kubernetes is an open-source container management system that helps us deploy containers and scale them according to our requirements. If you need to create a two-node cluster running some applications, you can configure that with the help of Kubernetes. The Kubernetes master takes care of resource allocation and maintenance of the containers deployed in the cluster.

In this tutorial I am setting up a cluster with one master and two slave nodes using three CentOS machines. Please note that you need root access to complete the steps described, and steps 1 through 8 must be run on all the nodes.

Step 1: Identify the IP addresses and hostnames of the machines that we are going to configure

10.1.X.X -- Machine host name 1 -- > Kubernetes Master
10.1.X.X -- Machine host name 2 -- > Kubernetes Slave Node 1
10.1.X.X -- Machine host name 3 -- > Kubernetes Slave Node 2

Step 2: Edit the /etc/hosts file on all three machines and add the three IP addresses and hostnames

Eg: 
10.1.x.x Host_name_1
10.1.x.x Host_name_2
10.1.x.x Host_name_3


Step 3: Do a yum update
#sudo yum update

Step 4: Do some other configurations on the machines using the below commands

sudo setenforce 0
sudo sed -i --follow-symlinks 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/sysconfig/selinux
sudo modprobe br_netfilter
echo '1' | sudo tee /proc/sys/net/bridge/bridge-nf-call-iptables
sudo swapoff -a
sudo vim /etc/fstab

In /etc/fstab, comment out the swap entry so that swap stays disabled after a reboot, then save and exit the editor.
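
On a stock CentOS 7 install the swap entry in /etc/fstab usually looks something like the line below (the device name or UUID will differ on your machine); adding the leading # keeps swap disabled permanently:

#/dev/mapper/centos-swap swap    swap    defaults    0 0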




Step 5: Install Docker

The docker-ce package comes from Docker's own repository, so add that repository first.

sudo yum install -y yum-utils device-mapper-persistent-data lvm2
sudo yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
sudo yum install -y docker-ce

Step 6:  Install Kubernetes

Add the Kubernetes repository to the CentOS system by running the following command as root.
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg
EOF

Now install the kubernetes packages kubeadm, kubelet, and kubectl using the yum command below.
sudo yum install -y kubelet kubeadm kubectl

Step 7: Reboot the machine 
#sudo reboot

Step 8: Start Docker and Kubernetes

systemctl start docker && systemctl enable docker
systemctl start kubelet && systemctl enable kubelet

We need to make sure that docker-ce and Kubernetes are using the same cgroup driver.
Check the Docker cgroup driver using the docker info command:

docker info | grep -i cgroup

You will see that Docker is using 'cgroupfs' as its cgroup driver.
Now run the command below to change the kubelet cgroup driver to 'cgroupfs':

sed -i 's/cgroup-driver=systemd/cgroup-driver=cgroupfs/g' /etc/systemd/system/kubelet.service.d/10-kubeadm.conf

Reload the systemd system and restart the kubelet service:

systemctl daemon-reload
systemctl restart kubelet

Now we're ready to configure the Kubernetes cluster.


Step 9: Kubernetes cluster initialization.

In this step, we will initialize the kubernetes master cluster configuration.
Move the shell to the master server 'k8s-master' and run the command below to set up the kubernetes master.
kubeadm init --apiserver-advertise-address=10.1.X.X --pod-network-cidr=10.244.0.0/16


Note:
--apiserver-advertise-address determines which IP address Kubernetes should advertise its API server on (the master server's IP).
--pod-network-cidr specifies the range of IP addresses for the pod network. We're using the 'flannel' virtual network; if you want to use another pod network such as weave-net or calico, change the IP address range accordingly.

When the Kubernetes initialization is complete, you will get a result like the one below.



As shown in the command window, you can use the kubeadm join command to join nodes to the cluster. Save this join token for future use; if you need to add one more node to the existing cluster, you can run this command on the slave node to join it to the master.

Step 10: 

Now, in order to use kubectl, we need to run the commands shown in the kubeadm output.
Create a new '.kube' configuration directory and copy the 'admin.conf' configuration into it:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Next, deploy the flannel network to the Kubernetes cluster using kubectl.
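
The flannel manifest URL has changed over time; the command commonly used when this post was written (an assumption, so check the flannel project for the current URL) was:

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml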
Once the flannel network has been deployed to the Kubernetes cluster, wait a minute and then check the nodes and pods using the commands below.

Step 11:

Run the below commands to see the status of the cluster nodes and the pods running in it.

kubectl get nodes
kubectl get pods --all-namespaces
You will see that the 'k8s-master' node is running as the 'master' with status 'Ready', along with all the pods that are needed for the cluster, including the 'kube-flannel-ds' pods for the pod network configuration.
Make sure all the kube-system pods have status 'Running'.
The Kubernetes master initialization and configuration is now complete.

Step 12: Adding nodes to the existing cluster.

You can use the join command that you received when initializing the cluster to add nodes. Open a shell on each slave node and run the join command; as shown below, you will see the nodes joined to your cluster.
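
The join command printed by kubeadm init generally has the following shape (the token and hash here are placeholders; use the exact command from your own kubeadm init output):

kubeadm join 10.1.X.X:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>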




Wait for a few minutes, then go back to the master server and check the nodes using the following command.

sudo kubectl get nodes


Now you can see the cluster is ready with one master and two slaves; in the STATUS column all the nodes show as Ready.