On Premises Installation

This section describes installing the Kloudspot LISA platform (including KloudManage device management and KloudInsights analytics) in an on-premises configuration.

We support both highly available and single node configurations.

  • Installation on a single Ubuntu node using MicroK8S
  • Installation on a MicroK8S multi-node cluster
  • Working with limited or no public internet access

Subsections of On Premises Installation

Single Node

Introduction

This guide explains the installation process for the Kloudspot software stack on a single node, whether it’s a virtual machine or a bare-metal server running Ubuntu 22.04.

Components

The Kloudspot software stack consists of the following components, which can be selectively installed:

  • KloudHybrid
  • KloudInsights
  • KloudManage

You can choose which components to install by modifying the values in the YAML file used for the Helm chart installation.
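For example, the relevant feature toggles (key names as listed in the Configuration File Reference at the end of this document) might look like this fragment:

```yaml
# /etc/kloudspot/values.yaml (fragment) - select which components to install
feature:
  hybrid: true          # KloudHybrid components
  kloudinsights: false  # KloudInsights components
  kloudmanage: false    # KloudManage components
```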

System Requirements

To ensure smooth operation, it is recommended to meet the following specifications based on your desired configuration:

Minimum Specification

  • KloudHybrid (<3000 users) or KloudManage
  • 4 cores
  • 16 GB RAM
  • 150 GB SSD/Disk (configured with LVM, with 50 GB assigned to /)

Medium Specification

  • KloudHybrid (>3000 users) or KloudInsights
  • 8 cores
  • 32 GB RAM
  • 300 GB SSD/Disk (configured with LVM, with 50 GB assigned to /)

Full System Specification

For the complete software stack, it is recommended to have:

  • 16 cores
  • 64 GB RAM
  • 1 TB SSD (configured with LVM, with 100 GB assigned to /)

System Configuration

Follow these steps to configure your system before installing Kloudspot:

  1. Install the Ubuntu 22.04 Server image.

  2. Update the system’s libraries using the following commands:

    sudo apt-get update
    sudo apt-get -y upgrade
    

    Reboot your system after the upgrade.

  3. Install the Kloudspot tools by running the following command:

    curl -s https://registry.kloudspot.com/repository/files/on-prem.sh | sudo bash
    

    Once the installation is complete, logout and log back in again.

Storage Evaluation

Two types of storage are used for the installation:

  • Shared - allocated from a dynamic NFS share backed by the OpenEBS LVM provisioner.
  • Unshared - allocated from an LVM volume group using the OpenEBS LVM provisioner.

There needs to be enough free storage available to satisfy both needs. You can use the ‘kloudspot storage’ tool to review the available storage and estimate the required storage.

sjerman@k8s-single:~$ kloudspot storage estimate
Assuming a 1 node cluster installation
Openebs not installed
?                         Do you want to install Openebs? : [? for help] (y/N)
installing OpenEBS...
Successfully installed OpenEBS.
All Pods are UP now...
Volume Groups:

- ubuntu-vg ( 498.0 GB - 398 GB free)
  on /dev/sda3
  Total Free space: 398.0 GB

Disks:
/dev/sda (500G)
/dev/sda1 (1M)
/dev/sda2 (2G) mounted as /boot
/dev/sda3 (498G) in Volume Group ubuntu-vg
/dev/mapper/ubuntu--vg-ubuntu--lv (100G) mounted as /

What features do you want to use
? Enable KloudManage No
? Enable KloudInsights No
? Enable Kloudhybrid Yes
Using ubuntu-vg for unshared volumes
Available: Shared 82 GB, Unshared 398 GB

How much storage do you want to assign to each volume
? Stream processing elasticsearch (GB) 10
? Kloudinsights database (GB) 200
Required: Shared 0 GB, Unshared 200 GB
The configuration looks OK
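The accounting behind the estimate is straightforward: each volume request is counted against either the shared or the unshared pool, and the totals are compared with the free space reported. A hypothetical sketch with made-up numbers (not the actual tool logic):

```shell
# Hypothetical sketch of the accounting 'kloudspot storage estimate' performs
# (illustrative numbers only).
shared_free=82      # GB free for NFS-backed shared storage
unshared_free=398   # GB free in the LVM volume group

elasticsearch=10    # unshared volume request (GB)
insights_db=200     # unshared volume request (GB)

required_shared=0
required_unshared=$((elasticsearch + insights_db))

echo "Required: Shared ${required_shared} GB, Unshared ${required_unshared} GB"
if [ "$required_shared" -le "$shared_free" ] && [ "$required_unshared" -le "$unshared_free" ]; then
  echo "The configuration looks OK"
fi
```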

Prepare Kloudspot Configuration

You can use the kloudspot init command to create a configuration file for your new system. It will ask a few questions and then create a configuration file (/etc/kloudspot/values.yaml) with the necessary configuration.

Warning

If you want to use your own StorageClass configuration, then please refer to CustomStorage

sjerman@k8s-single:~$ kloudspot init
installing OpenEBS...
Successfully installed OpenEBS.
All Pods are UP now.
By default, the ingress controller will use a self signed certificate.
It is much better to use a ‘proper’ SSL certificate.

Do you want to add ssl certificate? y/n

? Enter ssl key filepath: server.key
? Enter ssl ssl cert filepath: server.cert

Initialize Kloudspot System Configuration

First basic system information...
? DNS Hostname dibble.net
? Customer Reference steve

What features should be enabled
? Enable KloudManage No
? Enable KloudInsights No
? Enable Kloudhybrid Yes
Using ubuntu-vg for unshared volumes
Available: Shared 32 GB, Unshared 49 GB

How much storage do you want to assign to each volume
? Stream processing elasticsearch (GB) 10
? Kloudinsights database (GB) 10
Required: Shared 0 GB, Unshared 20 GB
'values.yaml' created sucessfully.

Start Kloudspot Application

Deploy the helm chart using:

kloudspot start

The deployment will take a while to complete; use the following command to monitor it:

kloudspot status

Update the deployment

kloudspot update --update-helm

Uninstall the deployment

kloudspot stop

On-Prem Ports & Firewall Configuration

It is assumed that there are no port restrictions on communications between nodes.

Outbound

The following outbound ports/paths need to be allowed in most configurations:

| Purpose | Destination Address | Destination Port | Protocol | Service |
|---|---|---|---|---|
| Software & license install *1 | *.kloudspot.com | 443 | TCP | HTTPS |
| Docker images *1 | https://docker.io, https://registry.k8s.io, https://quay.io | 443 | TCP | HTTPS |
| Network Time | *.ntp.org | 123 | UDP | NTP |
| Cisco WLC access (if required) | | 16113 | TCP | |

*1 : The installation can be configured to get these images from docker.kloudspot.com or they can be sideloaded. See here for details

Inbound

Single Node

The following inbound ports need to be allowed if the function is required

| Port | Usage | Optional |
|---|---|---|
| 30003/UDP | Aruba RTLS | yes |
| 30004/UDP | Aeroscout | yes |
| 30002/TCP | Meraki MV Sense MQTT | yes |
| 30005/UDP | Huawei | yes |
| 30006/UDP | Huawei BLE | yes |

Cluster

| Port | Usage | Optional |
|---|---|---|
| 3333/UDP | Aruba RTLS | yes |
| 5555/UDP | Aeroscout | yes |
| 6666/TCP | Meraki MV Sense MQTT | yes |
| 7777/UDP | Huawei | yes |
| 7778/UDP | Huawei BLE | yes |

Multi-node Cluster

Overview

The Kloudspot software stack can be run on a High Availability Kubernetes cluster (comprising 3 or more compute nodes). As with a Single Node install, these instructions assume the use of MicroK8S; however, a similar approach should work with other K8S installations.

The primary requirement is that the underlying hardware must itself be highly available - no shared power, networking or physical components.

Please refer to the MicroK8S documentation for background to these instructions.

The cluster will be configured as follows:

Access to the cluster is via a single virtual IP address shared by the cluster and exposed by a network load balancer managed by MetalLB.

There are three types of component in the architecture:

  • Stateless services with no shared storage (eg report generator)
  • Stateful services with LVM storage on each node (eg MongoDB)
  • Stateful services with OpenEBS cStor shared storage (eg Flink job manager).

If any node fails, the following happens:

  • Stateless components will fail over to other nodes.
  • Stateful components with LVM storage will continue to operate with degraded availability.
  • Stateful components with shared storage will fail over to another node.

When the failed node comes back up:

  • Stateless components will rebalance if necessary.
  • Stateful components with LVM storage will restart, automatically resynchronize and start operating with full availability.
  • Stateful components with shared storage will rebalance if necessary.

System requirements

Important

Any system used needs to support the AVX flag - most newer bare metal systems will support this. VM servers often don’t by default. Please refer to your VM server documentation.

Each node should have a minimum of the following specification:

  • 8 core
  • 32 GB RAM
  • 1 x 1TB SSD configured using LVM with 100 GB assigned to /
  • Ubuntu 22.04 Server image

The recommended spec when running both KloudInsights and KloudManage is:

  • 16 core
  • 64 GB RAM
  • 1 x 1TB SSD configured using LVM with 100 GB assigned to /
  • Ubuntu 22.04 Server image

Three nodes are required for a system to be able to survive node failure; however, if there is a heavy load on the system, one or more worker nodes may need to be added to the cluster to provide extra capacity.

Important

Before you start the steps below, please obtain the following:

  • A static IP Address for each node
  • A static shared IP Address to use for the load balancer
  • A DNS entry for the shared IP address
  • A TLS certificate and key to use for the shared IP address (recommended)

Configure Each System

Install Ubuntu 22.04 on each system, and then update the system to the latest libraries:

sudo apt-get update
sudo apt-get -y upgrade
Info

Take the defaults for any questions

Reboot, then install the Kloudspot tools using the following command:

curl -s https://registry.kloudspot.com/repository/files/on-prem-cluster.sh | sudo bash

Logout and log back in again.

Then:

Create a volume in the LVM volume group created during installation that can be used for shared storage. Typically, the volume group will be called ‘ubuntu-vg’, so the following command should work:

sudo lvcreate -L 20G -n shared ubuntu-vg

Cloning Cluster Nodes

At this point you have a configured system that you can use to create clones.

If you do this, please refer to this reference to change the machine-id.

Also make sure to configure the IP addresses correctly on each node.

If you cannot assign static IPs in your DHCP server, you may need to explicitly set static IPs for the nodes. Follow these instructions if so.

  • Set the hostnames. Run the following command as appropriate on each node:
sudo hostnamectl set-hostname k8s-vm-<X>
  • Edit the /etc/hosts file: update the local address and add entries for the other IPs. Eg.
127.0.0.1 localhost
127.0.1.1 k8s-vm-1

# The following lines are desirable for IPv6 capable hosts
::1     ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters

192.168.1.106 k8s-vm-1
192.168.1.174 k8s-vm-2
192.168.1.192 k8s-vm-3
  • Change the IP addresses of each node from DHCP to Static.

Edit the netplan file (e.g. /etc/netplan/00-installer-config.yaml). For example:

network:
  ethernets:
    ens18:
      dhcp4: no
      addresses: [192.168.1.106/24]
      routes:
      - to: default
        via: 192.168.1.254
      nameservers:
        addresses: [192.168.1.254,8.8.8.8]

Run sudo netplan apply to apply the changes.

Reboot the node.

Once this is done for each node continue to next step.

Configure The Cluster

Next set up the MicroK8S cluster following these instructions.

Basically, run this command on one node to obtain the join command to run on another node.

microk8s add-node

If you have 3 nodes, allocate all 3 as managers.

If you have more than 3 nodes, allocate 3 as managers and the rest as workers.

Run the following commands to enable the other required MicroK8S add-ons:

microk8s enable dns
microk8s enable ingress

For full high availability we need to configure a virtual shared IP and a load balancer; we use MetalLB for this. It needs to be configured with a fixed static IP address. Run the following command on one of the nodes:

microk8s enable metallb:<ip address>/32

Configure Storage

Two types of storage are used for the installation:

  • Shared - allocated from OpenEBS cStor shared storage.
  • Unshared - allocated from an LVM volume group using the OpenEBS LVM provisioner.

There needs to be enough free storage available to satisfy both needs. You can use the ‘kloudspot storage’ tool to review the available storage and estimate the required storage.

kloudspot@nmsc02:~$ kloudspot storage estimate
Assuming a 3 node cluster installation
Openebs not installed
?                         Do you want to install Openebs? : [? for help] (y/N)
installing OpenEBS...
Successfully installed OpenEBS.
All Pods are UP now...
Volume Groups:
-  vg_data ( 93.1 GB - 87.1 GB free)
   on /dev/sdb1
-  vg_share ( 106.9 GB - 0.0 GB free)
   on /dev/sdb2
Total Free space: 87.1 GB

Disks:
/dev/sda (50G)
  /dev/sda1 (1M)
  /dev/sda2 (50G) mounted as /
/dev/sdb (200G)
  /dev/sdb1 (93.1G) in Volume Group vg_data
  /dev/sdb2 (106.9G) in Volume Group vg_share
    /dev/mapper/ubuntu--vg-shared (100G)

What features do you want to use
?                                      Enable KloudManage : No
?                                    Enable KloudInsights : Yes
?                                      Enable Kloudhybrid : No
Using vg_data for unshared volumes
Available: Shared 106 GB, Unshared 87 GB

How much storage do you want to assign to each volume
?                    Stream processing elasticsearch (GB) : 10
?                    Stream processing state storage (GB) : 10
?                             Kloudinsights database (GB) : 10
?                        Kafka distributed messaging (GB) : 2
?                              Zookeeper coordinator (GB) : 2
Required: Shared 20 GB, Unshared 14 GB
The configuration looks OK
Remember each node needs this amount of storage

Prepare Kloudspot Configuration

You can use the ‘kloudspot init’ command to create a configuration file for your new system. It will ask a few questions and then create a ‘values.yaml’ file with the necessary configuration.

Warning

If you want to use your own StorageClass configuration, then please refer to CustomStorage

sjerman@k8s-single:~$ kloudspot init
installing OpenEBS...
Successfully installed OpenEBS.
All Pods are UP now...
# If you have multiple blockdevices on a node, select the appropriate one from the list:

which blockdevice do you want to use for node cluster1 ?  blockdevice-b168c57f62054cfea8ee52cbde230d77

which blockdevice do you want to use for node cluster2? blockdevice-124ba28ef2874c3aa2a94967ccda6000

which blockdevice do you want to use for node cluster3 ? blockdevice-04f218481b2c48b3aa2dd1f1767c4823

Waiting for all CSPI UP...
CSPI up now!

Initialize Kloudspot System Configuration

By default, the ingress controller will use a self signed certificate.
It is much better to use a ‘proper’ SSL certificate.

Do you want to add ssl certificate? y/n
? Enter ssl key filepath: server.key
? Enter ssl ssl cert filepath: server.cert

Initialize Kloudspot System Configuration

First basic system information...
? DNS Hostname dibble.net
? Customer Reference steve

What features should be enabled
? Enable KloudManage No
? Enable KloudInsights No
? Enable Kloudhybrid Yes
Using ubuntu-vg for unshared volumes
Available:  Shared 32 GB, Unshared 49 GB

How much storage do you want to assign to each volume
? Stream processing elasticsearch (GB) 10
? Kloudinsights database (GB) 10
Required: Shared 0 GB, Unshared 20 GB
'/etc/kloudspot/values.yaml' created sucessfully.

Deploy Kloudspot Helm Chart

Deploy the helm chart using:

kloudspot start

The deployment will take a while to complete; use the following commands to monitor it:

microk8s helm3 status kloudspot
microk8s kubectl get all
kloudspot status

Update the deployment

kloudspot update --update-helm

Uninstall the deployment

kloudspot stop

See here for general instructions on using Helm

Tips and Tricks

Here are some tips for debugging and diagnosing issues:

Kubernetes CLI

The Kubernetes CLI is available as ‘microk8s kubectl’.

Dashboard

The ’nicest’ way to explore the system, access logs etc is using the dashboard. You can enable an ingress for it using the following in the ‘values.yaml’ file:

debug:
  dashboard: true

And then go to https://<ip or hostname>/k8sdash/

You can get the required token using the following command:

kubectl create token default

If you just want temporary access, you can start up a dashboard proxy:

microk8s dashboard-proxy

The dashboard will be available on port 10443. Authenticate using the token that prints to the console.

Debug Container

Run the following command to enable:

kloudspot debug enable

A debug shell pod will be created containing useful utilities for accessing the database, Kafka, etc. You can connect to it either via the dashboard or via the CLI:

kloudspot connect <container>
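The debug container should correspond to the `debug.*` values listed in the Configuration File Reference at the end of this document, so enabling it directly in the values file would look something like this fragment (volume size shown is the chart default):

```yaml
# /etc/kloudspot/values.yaml (fragment)
debug:
  enabled: true      # create the debug shell pod
  persistence:
    size: 10Gi       # volume for the debug container (chart default)
```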

Configuration Values

All of the configuration for the helm chart is set via the ‘/etc/kloudspot/values.yaml’ file.

The documentation for the available values is here: Configuration Value Reference.

Since that file will gradually go out of date, you can get the current values from the helm command:

defaults from helm chart:

microk8s helm3 show values kloudspot/kloudspot

values overrides being used currently:

 microk8s helm3 get  values kloudspot

all values being used currently:

 microk8s helm3 get  values --all kloudspot

Remember that you need to update the helm repo to pick up the latest chart:

 microk8s helm3 repo update

Storage Configuration

LVM Volume Group Configuration

Detailed information on LVM volume group configuration is beyond the scope of these instructions. See here for a readable guide.

However, two common scenarios are as follows:

Default Ubuntu installation on a single disk.

The Ubuntu installer, by default, creates an LVM volume group occupying the whole disk and then allocates 50% of the VG (or 30 GB) from the group as the root volume (’/’). The free space in the volume group is then available to create other volumes.

Two Disks.

If you have a separate disk allocated for LVM, you probably need to create a volume group. You can use the following commands to identify a disk and provision it for LVM:

List available disks:

sudo lsblk -p  # find disks
NAME                      MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
...
├─/dev/vda1                           252:1    0     1M  0 part
├─/dev/vda2                           252:2    0     2G  0 part /boot
└─/dev/vda3                           252:3    0    98G  0 part
  ├─/dev/mapper/ubuntu--vg-ubuntu--lv 253:0    0    49G  0 lvm  /
  └─/dev/mapper/ubuntu--vg-gluster    253:1    0    49G  0 lvm
/dev/vdb                              252:16   0   200G  0 disk
└─/dev/vdb1                           252:17   0   200G  0 part

Create an LVM Volume Group for local provisioning

sudo pvcreate /dev/vdb1
sudo vgcreate vg_data  /dev/vdb1

Logical Volume Creation

Again, a full treatment is beyond the scope of this guide, but here are two examples:

Consume all available space in the VG:

sudo lvcreate -l 100%FREE -n cstor ubuntu-vg

Create a 20GB logical volume:

sudo lvcreate -L 20G -n cstor ubuntu-vg

Troubleshooting

There are a few potential issues that can prevent proper startup. The notes below all assume that you have started up the Kubernetes Dashboard.

Storage Provisioning

The most common reason for components not starting up correctly is incorrect storage provisioning. When the Helm chart is set up, a number of Persistent Volume Claims (PVCs) are created. Each claim is a request to the underlying storage provisioner for a volume with a specific size, access mode and storage class.

Single Node

In a single node installation, all except one PVC will be provisioned using OpenEBS Local LVM from the LVM volume group defined by storage.local.vg.

So ensure that this volume group has enough free space. If you look at the PVC list in the dashboard, you should be able to spot the issue.

The remaining PVC (for Flink job manager state) will be provisioned using OpenEBS NFS storage. This type of storage allows a volume to be shared between multiple pods (‘ReadWriteMany’). The storage will be provisioned from the root file system.

Multiple Nodes

In a cluster, two types of storage class are used:

  • OpenEBS Local LVM for unshared, per-node storage.
  • OpenEBS cStor for shared storage.

Again, you should ensure that there is enough free space of each type.

TLS

If your HTTPS certificate is not working correctly, you can look at the Ingress logs for possible issues with the certificate.

Image Downloads

There are a few ‘containerd’ commands that can be useful when figuring out what is happening with image downloads:

microk8s ctr images check

Shows what images are being retrieved (which might include multiple image layers).

microk8s ctr content active

Shows active content (images/manifests) transfers.

Limited Internet Access

Kubernetes pulls images from the network according to the ImagePullPolicy which is set to IfNotPresent for the Kloudspot application containers.

The rule works as follows:

  • If the image tag is not ’latest’ then the image is only pulled if it is not already present locally.
  • If the image is ’latest’ then the image will always be pulled.

So don’t set a tag to ’latest’ for an offline installation!
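The rule above can be sketched as a tiny decision function (purely illustrative; this is not kubelet code):

```shell
# Illustrative sketch of the pull rule described above: a 'latest' tag is
# always pulled; any other tag is pulled only when the image is not
# already present locally.
should_pull() {
  tag="$1"
  present_locally="$2"
  if [ "$tag" = "latest" ] || [ "$present_locally" = "no" ]; then
    echo "pull"
  else
    echo "skip"
  fi
}

should_pull latest yes     # 'latest' is always pulled
should_pull 3.0.2461 yes   # pinned tag, already present: skipped
should_pull 3.0.2461 no    # pinned tag, missing locally: pulled
```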

Docker Registry Access needed:

By default, the following URLs need to be accessible in order to allow images to be loaded. If this is not possible, then you will have to sideload the images.

Side Loading Images

The easiest way to sideload images is as follows:

  • Set up a system with all of the required images and run the following command:
microk8s images export-local > images.tgz
  • Copy the file images.tgz to the off-line system.

  • Then on the offline system, run the following command:

microk8s images import < images.tgz

Click here for more details

Once the images are loaded, you can limit garbage collection by running the following script to add a ’label’ to the images.

/opt/kloudspot/bin/label-images 

Subsections of Advanced Topics

Standalone Receiver Setup for Cisco WLC

The data receiver can be set up in standalone mode to work as a proxy. It can also be configured in a High Availability/scalable N+1 configuration, using ZooKeeper for group coordination and leader election. These instructions only cover a single-server, non-HA installation.

If required, the receiver can be set up in either an Active/Active (using 3 or more hardware-independent nodes) or Active/Passive (using two nodes) configuration. Please ask if you need to implement these configurations.

Configuration

Please ensure that the server configuration is completed first - talk to Kloudspot support to get this done.

Proxy VM requirements

OS/Resources:

The VM running the proxy should have the following specification:

  • RAM: 8 GB
  • Disk: 50 GB
  • CPU: 4 cores
  • OS: Ubuntu 18.04

Firewall:

The following routes should be enabled:

  • From VM to WLC : port 16113
  • From VM to Kloudspot Analytics Platform : port 9094

Proxy Receiver Setup

Set up and update/upgrade a clean installation of Ubuntu 18.04 LTS.

When installing from scratch, make sure to install the OpenSSH server to allow remote access.

Test connectivity to the WLC and the Kloudspot Analytics Server from the VM:

$ nc -w2 -vz <WLC IP> 16113
$ nc -w2 -vz <Kloudspot server IP> 9094

Add Kloudspot’s official GPG public key:

 $ curl -fsSL https://registry.kloudspot.com/repository/files/kloudspot.gpg.key | sudo apt-key add -

Verify that you now have the key with the fingerprint 7DD9 F762 BBDB FBC9 3103 4270 0B15 B423 21FA FC35, by searching for the last 8 characters of the fingerprint.

$ sudo apt-key fingerprint 21FAFC35
pub   rsa2048 2019-12-02 [SC] [expires: 2021-12-01]
      7DD9 F762 BBDB FBC9 3103  4270 0B15 B423 21FA FC35
uid           [ unknown] Steve Jerman <steve@kloudspot.com>
sub   rsa2048 2019-12-02 [E] [expires: 2021-12-01]

Use the following command to set up the repository.

$ sudo add-apt-repository \
  "deb [arch=amd64] https://registry.kloudspot.com/repository/kloudspot-apt/  bionic main"

Install the receiver and its required components (Java, ZooKeeper):

$ sudo apt-get update
$ sudo apt-get install kloudspot-receiver

Start zookeeper

$ sudo service zookeeper start
$ sudo systemctl enable zookeeper

Kloudspot Support will provide a client.truststore.pkcs file and password. Copy the file to /etc/kloudspot and edit the /etc/kloudspot/receiver.yml configuration to set the password and server address:

kafka:
  servers: <kloudspot server IP>:9094
  ssl: true
  truststore-location: /etc/kloudspot/client.truststore.pkcs
  truststore-password: replace-me

Kloudspot Internal Note: See here for generation instructions

Run the receiver to see the connection command.

$ sudo -H -u kloudspot /usr/local/kloudspot/receiver/run.sh

You can stop the script (Ctrl-C) as soon as you see this:

**************************

Run this command on the WLC
  config auth-list add sha256-lbs-ssc <MAC Address> <SHA256>

********************

Run the specific command shown in the program log from the above step on the WLC.

Then edit the /etc/kloudspot/receiver.yml and add the WLC host IP:

    standalone:      
        enabled: true
        connections:
        - type: wlc
          host: <WLC IP>

At this point installation should be complete. You can start up the receiver as a service with the following commands:

$ sudo service kloudspot-receiver start
$ sudo systemctl enable kloudspot-receiver

You can see the log using:

$ sudo journalctl -u kloudspot-receiver -f
$ sudo journalctl -u kloudspot-receiver --since "10min ago"

Generating the WLC SSL Connection File

The connection to the WLC is authorized using a MAC address and an SSL file. To regenerate this file, follow these steps:

  1. Edit the /etc/kloudspot/receiver.yml file. Remove the current MAC address and set the keystore to an empty writeable location; also change the password if desired:

    push:
    ..
       nmsp:
          inputBufferSize: 48768
          macAddress: '50:D3:7B:5B:70:F8'
          keystore:
             password: erHSbFfpKWLf
             file: file:/tmp/wlc-keystore.pks
  2. Run the receiver to see the connection command.
    $ sudo -H -u kloudspot /usr/local/kloudspot/receiver/run.sh
    ...
    **************************
    Add this values to the config file (push.nmsp.macAddress) :
    MAC: 50:D3:7B:5B:70:F8

    Run this command on the WLC
      config auth-list add sha256-lbs-ssc 50:D3:7B:5B:70:F8 fecb74538bb6be79f33b4dc23951552cd86523c0e563b5ac13070bf4205e0538
    ********************

Stop the receiver as soon as you see the connection command.

  3. Copy the generated keystore (/tmp/wlc-keystore.pks) to /etc/kloudspot and edit /etc/kloudspot/receiver.yml as follows:
push:
...
    nmsp:
...
        macAddress: '50:D3:7B:5B:70:F8'
        keystore:
           password: erHSbFfpKWLf
           file: file:/etc/kloudspot/wlc-keystore.pks

Upgrade

The following procedure should be followed to upgrade the proxy receiver. Note that downgrade is not supported.

Preparation

Prior to doing the upgrade, make sure to back up your system.

  1. Backup VM:
    • Ideally, have a snapshot available to restore in case of issues.
  2. File Backup:
    • Take a copy of all files located in /etc/kloudspot.

Upgrade Process

  1. Stop Services:
   sudo service kloudspot-receiver stop
   sudo service zookeeper stop
  2. Update the KloudInsights instance that will be receiving data. This will likely need liaison with Kloudspot operations.

  3. Update Receiver:

   sudo apt-get update
   sudo apt-get upgrade
   <reboot>

The upgrade process is now complete.

Post-Upgrade Checks

Perform the following checks to ensure the successful completion of the upgrade:

  1. Check Java Version:
   java -version
   The version should be 17.
  2. Check Receiver Logs:
   sudo journalctl -u kloudspot-receiver -f

There should be no errors displayed.

Additional Considerations

  • Ubuntu Version:

    • The server should be running Ubuntu 18.04.5 LTS, which is still in support.
  • Snapshot Backup:

    • Data will be lost for the duration the receiver is down.
  • Checking Component Versions:

    • Use the following command to list available versions:
      apt list kloudspot-receiver  -a
  • Installing a Specific Component Version:

    • To install a specific version, use:
     sudo apt-get install -y kloudspot-receiver=<version>

Questions and Answers

  1. Can we use the following commands instead?
    apt-get install  -y openjdk-17-jdk
    apt-get install  -y kloudspot-receiver

This is because our VPN software might be included in the package list, and we don’t want to upgrade any packages other than those related to kloudspot-receiver.

Ans. Yes, that approach is acceptable.

  2. What version should kloudspot-receiver be upgraded to? Also, how can we confirm the version after upgrading? Will “kloudspot-receiver -version” work?

Ans. Ask Kloudspot for the appropriate version to upgrade to. To confirm the version, use the following command:

apt list kloudspot-receiver  -a

Using a custom StorageClass

Most standard installations on MicroK8S use OpenEBS for storage and are configured automatically. However, some installations might need a custom configuration - for example to use NAS storage.

It is possible to configure the system to use custom Kubernetes StorageClasses as follows:

Warning

You need to create your own custom StorageClass for RWX and RWO mode. Please refer to Storage Classes in the Kubernetes documentation.
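As a purely hypothetical illustration (the provisioner name and all parameter values depend entirely on your environment), an RWX-capable StorageClass backed by an NFS CSI driver might look like:

```yaml
# Hypothetical custom StorageClass for RWX volumes; all values here are
# placeholders for your own environment.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: custom-rwx
provisioner: nfs.csi.k8s.io     # assumes the NFS CSI driver is installed
parameters:
  server: nfs.example.com       # your NFS server
  share: /exports/kloudspot     # exported path to provision from
reclaimPolicy: Retain
volumeBindingMode: Immediate
```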

You can use the ‘kloudspot init --custom-storage’ command to create a configuration file for your new system. It will ask a few questions and then create a ‘values.yaml’ file with the necessary configuration.

sjerman@k8s-single:~$ kloudspot init --custom-storage
?                                  Storage Configurations : Use custom storage class.
Use custom storage class.
?                            StorageClass for RWO storage : Name: microk8s-hostpath, Provisioner: microk8s.io/hostpath
?                            StorageClass for RWX storage : Name: microk8s-hostpath, Provisioner: microk8s.io/hostpath
Now using custom StorageClass for configuration
Initialize Kloudspot System Configuration

By default, the ingress controller will use a self signed certificate.
It is much better to use a ‘proper’ SSL certificate.

Do you want to add ssl certificate? y/n
? Enter ssl key filepath: server.key
? Enter ssl ssl cert filepath: server.cert

Initialize Kloudspot System Configuration

First basic system information...
? DNS Hostname dibble.net
? Customer Reference steve

What features should be enabled
? Enable KloudManage No
? Enable KloudInsights No
? Enable Kloudhybrid Yes
Using ubuntu-vg for unshared volumes
Available:  Shared 32 GB, Unshared 49 GB

How much storage do you want to assign to each volume
? Stream processing elasticsearch (GB) 10
? Kloudinsights database (GB) 10
Required: Shared 0 GB, Unshared 20 GB
'/etc/kloudspot/values.yaml' created sucessfully.

Command Line Interface Overview

The Command Line Interface (CLI) in Ubuntu is a powerful text-based tool accessed through the Terminal. It allows users to execute commands for file manipulation, system management, networking, and more. It is resource-efficient, enables remote server management, and complements the graphical interface. With its scripting capabilities, the CLI is indispensable for system administrators, developers, and experienced users.

Configuration File Reference

kloudspot

Version: 1.1.0 Type: application AppVersion: 3.0.2461

A Helm chart for Kloudspot KloudInsights & KloudManage Applications

Requirements

| Repository | Name | Version |
|---|---|---|
| https://charts.bitnami.com/bitnami | kafka | 18.0.3 |
| https://charts.bitnami.com/bitnami | mariadb-galera | 7.5.0 |
| https://charts.bitnami.com/bitnami | mongodb | 13.6.2 |

Values

Key Type Default Description
application_secret_name string "kloudspot-secret" name of kloudspot bootstrap secret
debug.arg string nil run argument (for replaying test data)
debug.dashboard bool false Enable ingress access to the dashboard Run ‘microk8s kubectl create token default’ to get the token.
debug.enabled bool false Enable debug container
debug.persistence object {"size":"10Gi"} size of volume for debug container
debug.privateApi bool false Show private swagger docs
elasticsearch.persistence.size string "2Gi" Size of volume used for elasticsearch
elasticsearch_nms.esmemory string "4g" Elasticsearch Memory Allocation
elasticsearch_nms.persistence.size string "50Gi" Size of volume used for legacy Elasticsearch
feature.demoData bool false Load KloudInsights demo data
feature.digitaltwin bool false Load DigitalTwin app
feature.fiware bool false FiWare
feature.fiware_iot bool false
feature.frsvision bool false Load frs-vision
feature.full bool false Enable all KloudInsights functionality
feature.ha bool false Configure cluster usage
feature.hybrid bool true Enable KloudHybrid components
feature.kloudinsights bool true Enable KloudInsights components
feature.kloudmanage bool false Enable KloudManage components
feature.teams bool false Load teams app
fiware string nil
frsvision.extra_env string nil
gateway.apikey string nil Gateway API key
gateway.apisecret string nil Gateway API secret
global.storageClass string "openebs-local-kloudspot" Storage class used for dependency charts (Kafka/Zookeeper/MongoDB)
imagePullSecrets string "dockerregistrykey" Secret used to access Kloudspot Docker Private Registry Please reach out to Kloudspot team for username and password
ingress object {"annotations":{},"spec":{}} Custom ingress - replaces default
jobmanager.memory string "4096" Memory allocated for Flink Job Manager (MB).
jobmanager.persistence.size string "1Gi" Size of volume used for state storage
kafka.commonLabels.tier string "base"
kafka.logRetentionBytes string "_104857600"
kafka.logRetentionHours int 48
kafka.logSegmentBytes string "_104857600"
kafka.persistence.size string "5Gi" Size of volume used for kafka
kafka.zookeeper.persistence.size string "1Gi" Size of volume used for zookeeper
kloudmanage.extra_env string nil Map containing custom environment variables for the kloudmanage container (all values need to be strings)
kloudmanage.persistence.size string "50Gi" Volume size used for all storage types
license_secret_name string "kloudspot-license" name of offline license
mariadb-galera.commonLabels.tier string "base"
mariadb-galera.existingSecret string "kloudspot-secret"
mariadb-galera.persistence.labels.backup string "true"
mariadb-galera.persistence.size string "4Gi"
mongo_db string "jameson" name of Mongo database
mongodb.commonLabels.backup string "true"
mongodb.commonLabels.tier string "base"
mongodb.persistence.annotations.backup string "true"
mongodb.persistence.size string "20Gi" Size of volume used for database storage
mqtt_svc.credentials.password string "kloudspot123" MQTT password @default not set
mqtt_svc.credentials.username string "kloudspot" MQTT username @default not set
namespace string "default" Namespace to deploy KloudInsights. The namespace used to deploy KloudInsights components; if left empty it defaults to .Release.Namespace (aka helm --namespace).
namespaceCreate bool false Create a K8S namespace if it doesn’t exist
receiverservice.extra_env string nil Map containing custom environment variables for the receiver service container (all values need to be strings)
receiverservice.heap string "2048M" Maximum Heap size for receiver service
storage.local.class string "openebs-local-kloudspot" Storage class used for single node storage volumes
storage.local.vg string "ubuntu-vg"
storage.shared.class string "openebs-kernel-nfs" Storage class used for volumes shared across a cluster
system.customer_ref string nil Identifier for the customer system (used for licensing)
system.external_proxy bool false Assume use of an external reverse proxy, so allow access via http
system.hostname string nil The DNS hostname for the system (required if using a TLS certificate)
system.ip_addr string nil The IP address for the system (required for KloudManage)
taskmanager.memory string "4096" Memory allocated for Task Manager (MB).
taskmanager.memoryManaged string "0.6" % of Memory allocated for managed memory
taskmanager.taskslots string "8" Number of Task Slots for Flink TaskManager
tls_secret string "kloudspot-tls" name of the TLS secret
versions.digitaltwin string set to latest release Version of DigitalTwin container
versions.dms string set to latest release Version of DMS container
versions.fiware_broker string "1.2.0-PRE-1305" Version of FiWare Orion-LD
versions.flink string set to latest release Version of Flink container
versions.frsvision string "latest" Version of frsVision
versions.insightsapp string set to latest release Version of insights-app container
versions.kloudmanage string set to latest release Version of KloudManage container
versions.receiverservice string set to latest release Version of receiver-service container
versions.staticcontent string set to latest release Version of static content container
versions.teams string "2.0.257" Version of Teams container
versions.webui string set to latest release Version of kloudinsights container
webui.extra_env string nil Map containing custom environment variables for KloudInsights container (all values need to be strings)
webui.extra_profiles list [] Extra Profiles to add to container
webui.globalUser bool false ‘Global User flag’ - always set for hybrid
webui.heap string "4096M" Maximum Heap size for webui container

Autogenerated from chart metadata using helm-docs v1.11.0
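
As an illustration, a small override file that selects components using the keys documented above might look like the following sketch. The hostname, IP address, and heap size are placeholders, not defaults; only set the keys you need to change.

```yaml
# Example /etc/kloudspot/values.yaml overrides -- illustrative only.
# Every key below is documented in the values table above.
feature:
  kloudinsights: true
  kloudmanage: true
  hybrid: false

system:
  hostname: insights.example.com   # placeholder DNS name
  ip_addr: 192.0.2.10              # placeholder address (required for KloudManage)

webui:
  heap: "4096M"
```

After editing, apply the changes with kloudspot update -u.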

SSH Shared Key Authentication

These instructions apply to macOS and Linux clients.

We recommend the use of SSH Shared Keys to secure CLI access to the system. The following steps can be used to set it up.

Create a Key Pair

  1. On a local system (e.g. your laptop) create a key pair using the following command:
ssh-keygen

Accept the default location.

  2. It will ask for a passphrase. You can either leave the passphrase blank or use ssh-agent to cache passphrases.

  3. The utility will create:

    • A private key: id_rsa. This stays on the local system and is used by anyone who will log in from it.
    • A public key: id_rsa.pub. This is added to the system you want to log in to.

Install on Remote systems

  1. Run the following command to copy the public key to a remote system:
ssh-copy-id <system>
  2. You should now be able to log in to the remote host:
ssh <system>
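
If you manage several systems, an entry in ~/.ssh/config saves retyping the hostname, user, and key path each time. A sketch, where the host alias, address, and login user are placeholders:

```
# ~/.ssh/config -- illustrative entry; alias, address, and user are placeholders
Host lisa-node
    HostName 192.0.2.10          # address of the Ubuntu node
    User ubuntu                  # remote login user
    IdentityFile ~/.ssh/id_rsa   # private key created above
```

With this in place, ssh lisa-node is equivalent to the full command.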

(Optional) Disable Password login on remote host

  1. Edit /etc/ssh/sshd_config and set the following parameter to no:
PasswordAuthentication no

  2. Then restart the SSH server:

sudo service ssh restart

Securing MicroK8S

These notes apply to version v1.26.6 of MicroK8S.

To list ciphers on port:

sudo snap install nmap
nmap --script ssl-enum-ciphers -p 16443 192.168.1.97

Ports

Port Usage Notes
16443 api server
10259 kube-scheduler
10257 kube-controller
10250 kubelet
25000 cluster-agent Can’t control ciphers

Update Configuration

Edit:

  • /var/snap/microk8s/current/args/kube-apiserver
  • /var/snap/microk8s/current/args/kube-scheduler
  • /var/snap/microk8s/current/args/kube-controller-manager
  • /var/snap/microk8s/current/args/kubelet

Add:

--tls-min-version=VersionTLS12
--tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA,TLS_RSA_WITH_AES_128_GCM_SHA256,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_256_CBC_SHA
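
Since the same flags go into each of the four args files, the edit can be scripted. A sketch, assuming the default snap layout; ARGS_DIR defaults to a scratch directory here so the loop can be tried safely before pointing it (with sudo) at /var/snap/microk8s/current/args:

```shell
# Append the TLS minimum-version flag to each component args file.
# ARGS_DIR defaults to a scratch directory for a dry run; set it to
# /var/snap/microk8s/current/args to apply for real.
ARGS_DIR="${ARGS_DIR:-$(mktemp -d)}"
for f in kube-apiserver kube-scheduler kube-controller-manager kubelet; do
  touch "$ARGS_DIR/$f"   # the real files already exist under the snap path
  printf -- '--tls-min-version=VersionTLS12\n' >> "$ARGS_DIR/$f"
done
grep -- '--tls-min-version' "$ARGS_DIR"/*
```

The --tls-cipher-suites line can be appended the same way.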

You cannot directly edit the cipher suites for cluster-agent, so either turn it off (microk8s disable ha-cluster) or restrict its TLS version as follows.

Edit /var/snap/microk8s/current/args/cluster-agent and add:

--min-tls-version=tls13

Restart MicroK8S

Run:

sudo snap restart microk8s

You can then check the port usage, using nmap as described above.

Monitoring

For a HA on-prem installation it is a good idea to set up monitoring and alerting so that you can monitor the state of the cluster and get alerts for issues such as memory limits exceeded or low disk space.

Once the kloudspot platform is installed, the necessary files will be installed in /opt/kloudspot/monitoring:

  • values.yaml : Helm chart configuration.
  • dashboard-config.yaml : Loader for Kloudspot specific dashboard.
  • monitors.yaml : Custom POD and Service monitor configurations to gather prometheus information from Kloudspot components.

Installation

Install the following Helm repo:

helm repo add prometheus-community https://prometheus-community.github.io/helm-charts   
helm repo update

Modify the /opt/kloudspot/monitoring/values.yaml file to suit your environment. Typically this will only mean setting the Grafana URL:

  grafana.ini:
    server:
      root_url: https://localhost/grafana

Install the helm chart:

cd /opt/kloudspot/monitoring
helm install mtr  -f values.yaml --create-namespace -n mtr prometheus-community/kube-prometheus-stack

Once started, you can login to the Grafana instance with the following credentials:

  • URL: https://<server>/grafana/
  • Username: admin
  • Password: prom-operator

You can also access the Prometheus UI using port forwarding:

kubectl port-forward -n mtr service/prometheus-operated 9090:9090

Load Kloudspot Configuration

Next, install the Pod Monitors and Service Monitors specific to the Kloudspot Platform:

kubectl apply -f monitors.yaml
servicemonitor.monitoring.coreos.com/kloudspot-flink-job-metrics created
podmonitor.monitoring.coreos.com/kloudspot-flink-tm-metrics created
servicemonitor.monitoring.coreos.com/kloudspot-web-ui-metrics created

Add a custom dashboard for the Kloudspot Platform:

kubectl apply -f dashboard-configmap.yaml
configmap/kloudspot-grafana-dashboard created

Enable Kafka Monitoring. Edit /etc/kloudspot/values.yaml:

kafka:
  ...
  metrics:
    kafka:
      enabled: true
    serviceMonitor:
      enabled: true
      labels:
        release: mtr

Then restart the Kloudspot services:

kloudspot update -u

You can also use the Grafana administration interface to create custom alerts and dashboards as required.

Once you have the monitors and dashboard loaded you will be able to see some data:

[Grafana dashboard screenshot]

Reference