Upgrading GitLab on Debian: from stretch to buster

The one thing I most admire about GitLab is its installation experience. Upgrades have always gone through smoothly. This time, though, running “apt update” and “apt upgrade” produced the following error message.

Err:7 https://packages.gitlab.com/gitlab/gitlab-ee/debian stretch InRelease
  The following signatures couldn't be verified because the public key is not available: NO_PUBKEY 3F01618A51312F3F
.....
.....
W: An error occurred during the signature verification. The repository is not updated and the previous index files will be used. GPG error: https://packages.gitlab.com/gitlab/gitlab-ee/debian stretch InRelease: The following signatures couldn't be verified because the public key is not available: NO_PUBKEY 3F01618A51312F3F
W: Failed to fetch https://packages.gitlab.com/gitlab/gitlab-ee/debian/dists/stretch/InRelease  The following signatures couldn't be verified because the public key is not available: NO_PUBKEY 3F01618A51312F3F
W: Some index files failed to download. They have been ignored, or old ones used instead.

Based on the error, and after exploring the GitLab installation page for Debian here, it was clear that the repository key for GitLab needed to be updated. Only step 2 (“Add the GitLab package repository and install the package”) in that guide has to be executed.

Once the execution of “script.deb.sh” completed, running apt upgrade did the rest. GitLab is now upgraded to version 12.9.3.
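Should the same NO_PUBKEY error appear again, the fix amounts to re-running the repository setup script, which re-imports GitLab's package signing key and refreshes the apt source entry. The commands below are a sketch assuming the gitlab-ee repository named in the error message:

```shell
# Re-run the GitLab repository setup script; it re-imports the signing
# key and rewrites the apt source entry for the gitlab-ee repository.
curl -sS https://packages.gitlab.com/install/repositories/gitlab/gitlab-ee/script.deb.sh | sudo bash

# The usual upgrade then works again.
sudo apt update
sudo apt upgrade
```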

Linux: Setting static IPv4 address using terminal

Once logged into the Linux machine, edit the /etc/network/interfaces file using your favorite text editor. You should see something like this at the end of the file:

iface eno1 inet dhcp

Change dhcp on this line to static, and add the other parameters as shown in the code below.

iface eno1 inet static
    address 192.168.1.10
    gateway 192.168.1.1
    netmask 255.255.255.0
    dns-nameservers 192.168.1.1

Additionally, to set the IPv6 address to auto, add the following line after the set of lines above.

iface eno1 inet6 auto

Save and exit the editor. Reboot the system.
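If a full reboot is not convenient, the interface can usually be bounced instead. This is a sketch assuming ifupdown manages the interface and that it is named eno1 as in the example above:

```shell
# Bring the interface down and back up so the static settings apply.
sudo ifdown eno1 && sudo ifup eno1

# Confirm the new address is in place.
ip addr show eno1
```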

Kubernetes cluster using Debian Buster on bare metal

This article is based on two separate articles, referred to here as article 1 and article 2. The former describes how to use Ansible to create a three-node bare metal cluster (one master and two workers) on Debian Stretch. The latter covers creating a single-node Kubernetes cluster on Debian Buster. This article combines techniques and procedures from both, along with a few instructions from the current Kubernetes documentation, to create a three-node cluster (one master and two workers) using Debian Buster.

Steps to be followed:

  1. Install Debian Buster
  2. Install Kubernetes dependencies
  3. Perform Kubernetes configuration

Download and install Debian Buster on three bare metal boxes. In the component selection dialog, make sure to check the SSH server component. The setup performed for this article also excluded all desktop environment components. Once the installation is complete, configure static networking by updating the /etc/network/interfaces file.

On the local system (Linux or WSL), configure passwordless remote access to these three servers using these instructions. Install Ansible if it is not already installed on the local system. Prepare the inventory list in a file called hosts as described in article 1. This file lists the master and the worker systems.
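For reference, a minimal hosts inventory might look like the sketch below. The group names follow article 1, while the host names, addresses and user are placeholders to be replaced with your own:

```ini
[masters]
master ansible_host=192.168.1.20 ansible_user=root

[workers]
worker1 ansible_host=192.168.1.21 ansible_user=root
worker2 ansible_host=192.168.1.22 ansible_user=root
```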

Create the files initial.yml, kube-dependencies.yml, master.yml and workers.yml as described in article 1. In kube-dependencies.yml, make sure to update the Kubernetes version to the latest supported version per the release notes and version skew section here. In master.yml, update the URL under the install pod network (flannel) task to the latest file location here.

Create a kube-firewall.yml file as shown below. This is based on article 2; the Kubernetes documentation can also be consulted for more information on the required ports.

- hosts: all
  become: yes
  tasks:
   - name: Install ufw
     apt:
       name: ufw
       state: present
       update_cache: true

   - name: Enable UFW
     ufw:
       state: enabled

   - name: Rate-limit ssh access
     ufw:
       rule: limit
       port: ssh
       proto: tcp

   - name: Allow all access to tcp port 10251
     ufw:
       rule: allow
       port: '10251'
       proto: tcp

   - name: Allow all access to tcp port 10255
     ufw:
       rule: allow
       port: '10255'
       proto: tcp

- hosts: master
  become: yes
  tasks:

   - name: Allow all access to tcp port 6443
     ufw:
       rule: allow
       port: '6443'
       proto: tcp

   - name: Allow all access to tcp port 2379
     ufw:
       rule: allow
       port: '2379'
       proto: tcp

   - name: Allow all access to tcp port 2380
     ufw:
       rule: allow
       port: '2380'
       proto: tcp

   - name: Allow all access to tcp port 10250
     ufw:
       rule: allow
       port: '10250'
       proto: tcp

   - name: Allow all access to tcp port 10252
     ufw:
       rule: allow
       port: '10252'
       proto: tcp

Create a kube-flannel-firewall.yml file as specified below. This opens the firewall ports for the flannel network add-on, as instructed here.

- hosts: all
  become: yes
  tasks:

   - name: Allow all access to udp port 8285
     ufw:
       rule: allow
       port: '8285'
       proto: udp

   - name: Allow all access to udp port 8472
     ufw:
       rule: allow
       port: '8472'
       proto: udp

Create a kube-nodeport-firewall.yml file to open up the NodePort range in the cluster. Please refer to the Kubernetes documentation here for more explanation.

- hosts: all
  become: yes
  tasks:

   - name: Allow all access to tcp ports 30000-32767
     ufw:
       rule: allow
       port: '30000:32767'
       proto: tcp

Now execute the playbooks using Ansible in the order below. For examples of how to execute Ansible playbooks, refer to article 1.

  1. initial.yml
  2. kube-dependencies.yml
  3. master.yml
  4. workers.yml
  5. kube-firewall.yml
  6. kube-flannel-firewall.yml
  7. kube-nodeport-firewall.yml
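Assuming the inventory file is named hosts and all the playbooks sit in the current directory, the run order above translates to:

```shell
# Execute the playbooks in order against the inventory in ./hosts.
ansible-playbook -i hosts initial.yml
ansible-playbook -i hosts kube-dependencies.yml
ansible-playbook -i hosts master.yml
ansible-playbook -i hosts workers.yml
ansible-playbook -i hosts kube-firewall.yml
ansible-playbook -i hosts kube-flannel-firewall.yml
ansible-playbook -i hosts kube-nodeport-firewall.yml
```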

At this point, for the reasons specified in this section, the Kubernetes executables have to be marked as held. Log in to each box and run the command below.

$ sudo apt-mark hold kubelet kubeadm kubectl
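The hold can be confirmed with apt-mark as well, and lifted later when a deliberate upgrade is planned:

```shell
# List packages currently held back from upgrades; the three Kubernetes
# packages should appear here.
apt-mark showhold

# To upgrade Kubernetes deliberately later, lift the hold first:
#   sudo apt-mark unhold kubelet kubeadm kubectl
```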

At this point the Kubernetes cluster should be ready. This can be checked by logging into the master node and running the kubectl get nodes command. If this shows anything other than Ready, more troubleshooting information, including logs, can be found in this article. This section of the Kubernetes documentation can also help.
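The check itself is a single command on the master node; node names, ages and versions will vary with your setup:

```shell
# Run on the master node; each node should eventually report STATUS "Ready".
kubectl get nodes
```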

To understand Kubernetes architecture in detail, I recommend the article here.

Linux: SSH login without password

Remote Linux servers can be administered using SSH. Normally, the local machine connects to these servers over SSH using a user id/password combination. The local machine can also be configured to log into these servers over SSH without a password.

To make this work, we generate a public/private key pair using ssh-keygen, then copy the public key to the remote server's authorized_keys file using ssh-copy-id.

The simplest use is:

$ ssh-keygen

$ ssh-copy-id -i ~/.ssh/id_rsa.pub remote_server
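Where ssh-copy-id is not available, the same result can be achieved by appending the public key to the remote authorized_keys file by hand. A common sketch (remote_server is a placeholder for your host):

```shell
# Manual equivalent of ssh-copy-id: append the local public key to the
# remote user's authorized_keys, creating ~/.ssh first if needed.
cat ~/.ssh/id_rsa.pub | ssh remote_server 'mkdir -p ~/.ssh && cat >> ~/.ssh/authorized_keys'
```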

For more information, there is a very informative blog written about this here.