Kubernetes Cluster Installation

1. To get started, log in to the dashboard, find the Kubernetes Cluster in the Marketplace, and click Install. Note that this clustered solution is available only for billing customers.

2. Choose the type of installation:

  • Clean Cluster with pre-deployed Hello World example

  • Deploy a custom Helm chart or stack via shell commands. Provide the list of commands that install the Helm chart or perform any other custom application deployment.

 

By default, you are offered to install the Open Liberty operator with the following set of commands:

# Namespace where the Open Liberty operator will be installed
OPERATOR_NAMESPACE=open-liberty

kubectl create namespace "$OPERATOR_NAMESPACE"

# Install the OpenLibertyApplication custom resource definition
kubectl apply -f https://raw.githubusercontent.com/OpenLiberty/open-liberty-operator/master/deploy/releases/0.7.0/openliberty-app-crd.yaml

# Set up cluster RBAC for the operator, scoped to the chosen namespace
curl -L https://raw.githubusercontent.com/OpenLiberty/open-liberty-operator/master/deploy/releases/0.7.0/openliberty-app-cluster-rbac.yaml | sed -e "s/OPEN_LIBERTY_OPERATOR_NAMESPACE/${OPERATOR_NAMESPACE}/" | kubectl apply -f -

# Deploy the operator itself, watching the same namespace
curl -L https://raw.githubusercontent.com/OpenLiberty/open-liberty-operator/master/deploy/releases/0.7.0/openliberty-app-operator.yaml | sed -e "s/OPEN_LIBERTY_WATCH_NAMESPACE/${OPERATOR_NAMESPACE}/" | kubectl apply -n ${OPERATOR_NAMESPACE} -f -

# Deploy the Open Liberty application definition provided by the Cloudjiffy package
kubectl apply -f https://raw.githubusercontent.com/cloudjiffy-jps/kubernetes/v1.18.10/addons/open-liberty.yaml
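If you prefer Helm for the custom deployment option, the same field accepts Helm commands. A minimal sketch, assuming Helm 3 (the repository, chart, and release names below are illustrative and not part of the default package):

# Add a public chart repository and install an example chart into its own namespace
helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo update
helm install demo-web bitnami/nginx --namespace demo --create-namespace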

3. As a next step, choose the required topology of the cluster. Two options are available:

  • Development: one master (1) and one scalable worker (1+) - lightweight version for testing and development purposes
  • Production: multi-master (3) with API balancers (2+) and scalable workers (2+) - cluster with pre-configured high availability for running applications in production

Where:

    • Multi-master (3) - three master nodes.
    • API balancers (2+) - two or more load balancers for distributing incoming API requests. In order to increase the number of balancers, scale them horizontally.
    • Scalable workers (2+) - two or more workers (Kubernetes Nodes). In order to increase the number of workers, scale them out horizontally.

 

4. Attach a dedicated NFS Storage with dynamic volume provisioning.

By default, every node has its own file system with read-write permissions, but for data to be accessible from other containers or to persist across redeployments, it should be placed on a dedicated volume.

You can use a custom dynamic volume provisioner by specifying the required settings in your deployment YAML files.

Or, you can keep the pre-configured volume manager and NFS Storage built into the Cloudjiffy Kubernetes cluster. In this case, persistent volumes are provisioned dynamically on demand and connected to the containers. The storage node can be accessed and managed using the file manager in the dashboard, SFTP, or any NFS client.
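For example, with the built-in dynamic provisioner an application can request storage through a regular PersistentVolumeClaim, and the volume is created on the NFS Storage automatically. A minimal sketch (the claim name is illustrative; storageClassName is omitted so the cluster's default provisioner is used):

# Create a claim that is bound to a dynamically provisioned NFS-backed volume
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demo-data
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
EOF

The claim can then be referenced from a pod or Deployment spec as a persistentVolumeClaim volume and mounted into containers.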

 

5. If necessary, you can install auxiliary software to monitor and troubleshoot the K8s cluster and enable API access by ticking the complementary tools checkboxes:

  • Install Prometheus & Grafana to monitor the K8s cluster and the application's health. This software requires an additional 5 GB of disk space for persistent volumes and consumes about 500 MB of RAM.
  • Install Jaeger tracing tools to ensure effective troubleshooting for distributed services.
  • Enable Remote API Access to be able to manage K8s via API.

If you decide not to install them now, you can do it later with the dedicated Cluster Configuration add-on.

6. In order to highlight all the package features and peculiarities, we initiate the installation of the Open Liberty application server runtime in the Production Kubernetes cluster topology with the built-in NFS Storage.

Click the Install button and wait a few minutes. Once the installation process is completed, the cluster topology looks as follows:

7. You can access the Kubernetes administration dashboard along with the Open Liberty application server welcome page from the successful installation window.

  • use the Access Token and follow the Kubernetes dashboard link to manage the Kubernetes cluster

k8s cluster deployed

access token

k8s dashboard

  • access the Open Liberty welcome page by pressing the Open in Browser button

access open liberty

 

Cloudjiffy Kubernetes Distribution Add-Ons

The Cloudjiffy K8s package comes with specific add-ons available on the master node.

kubernetes addons

  • Install SSL Certificate Manager:
    Before proceeding to the installation, attach a public IP address to one of the worker nodes. Then create an A record for your external domain (for example, myservice.jele.website) using the generated public IP. After that, enter this domain name in the Certificate Manager user interface and press Apply.

    kubernetes domain

  • Along with managing SSL certificates, it deploys a separate ingress controller to balance the workload between applications bound to the workers' public IPs. In this case, all internal resources become accessible via worker node hostnames like node${nodeId}-${envName}.${platformDomain}, except for the worker node whose public IP address was used to bind the external domain (for example, myservice.jele.website) with the help of the Certificate Manager. A minimal Ingress example is shown at the end of this list.

  • Enable/disable GitLab server integration within Cloudjiffy PaaS
  • Automatically upgrade the Kubernetes cluster
  • Switch on remote API access (see the section below for more information), if it wasn’t enabled during installation

    kubernetes API

  • Install and configure the monitoring tools Prometheus and Grafana, if they weren’t enabled during installation.

    kubernetes monitoring

    The respective email is sent and an informational popup with access credentials appears:

     

  • Install the tracing tool Jaeger.

    kubernetes troubleshooting


    The respective email is sent and an informational popup with access credentials appears:
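For reference, an application is typically exposed through that ingress controller with a standard Ingress resource. A minimal sketch for the Kubernetes 1.18 release referenced above (the resource and service names are illustrative; on Kubernetes 1.19+ use the networking.k8s.io/v1 API with its updated backend syntax):

# Route requests for the external domain to an illustrative backend service
cat <<'EOF' | kubectl apply -f -
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: demo-ingress
spec:
  rules:
    - host: myservice.jele.website
      http:
        paths:
          - path: /
            backend:
              serviceName: demo-service
              servicePort: 80
EOF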

     

Remote API Access to Kubernetes Cluster 

In order to access and manage the created Kubernetes cluster remotely using API, tick the Enable Remote API Access checkbox.

 

The Remote API Endpoint link and access Token should be used to access the Kubernetes API server (Balancer or Master node). 

api endpoint
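For a quick connectivity check, the endpoint can also be queried directly, for instance with curl (replace {API_URL} with the Remote API Endpoint link and {TOKEN} with the Access Token; -k skips TLS certificate verification):

curl -k -H "Authorization: Bearer {TOKEN}" {API_URL}/version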

The best way to interact with the API server is using the Kubernetes command-line tool kubectl:

  • Install the kubectl utility on your local computer following the official guide. For this article, we used the installation for Ubuntu Linux.
  • Then create a local configuration for kubectl. To do this, open a terminal on your local computer and issue the following commands:
$ kubectl config set-cluster mycluster --server={API_URL}
$ kubectl config set-context mycluster --cluster=mycluster
$ kubectl config set-credentials user --token={TOKEN}
$ kubectl config set-context mycluster --user=user
$ kubectl config use-context mycluster

Where:

{API_URL} - Remote API Endpoint link

{TOKEN} - Access Token

Now you can manage your Kubernetes cluster from your local computer.

As an example, let’s take a look at the list of all available nodes in our cluster. Open the local terminal and issue a command using kubectl:

user@cloudjiffy:~$ kubectl get nodes
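Any other kubectl command can be issued the same way, for example to list all pods across namespaces or to print general cluster information:

user@cloudjiffy:~$ kubectl get pods --all-namespaces
user@cloudjiffy:~$ kubectl cluster-info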

In order to disable/enable the API service after installation, use the Master node Configuration Add-On.

k8s cluster configuration

remote api access

Cluster Upgrade

To keep your Kubernetes cluster software up to date, use the Cluster Upgrade Add-On: just click the Start Cluster Upgrade button. The add-on checks whether a new version is available and, if so, installs it. During the upgrade procedure, all the nodes, including masters and workers, are redeployed to the new version one by one, while all the existing data and settings remain untouched. Keep in mind that the upgrade procedure is sequential between versions, so upgrading from a version that is several releases behind the latest one requires running the procedure multiple times. The upgrade becomes available only once a new version has been globally published by the Cloudjiffy team.

k8s cluster upgrade

In order to avoid downtime of your applications during the redeployment, please consider using multiple replicas for your services.
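For instance, the replica count of a workload can be raised either via spec.replicas in its Deployment manifest or on the fly with kubectl (the deployment name below is illustrative):

user@cloudjiffy:~$ kubectl scale deployment demo-app --replicas=3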

 

