<?xml version="1.0" encoding="utf-8"?>

<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
  <generator uri="https://jekyllrb.com/" version="3.10.0">Jekyll</generator>
  <link href="https://blog.andygol.co.ua/en/feed.xml" rel="self" type="application/atom+xml"/>
  <link href="https://blog.andygol.co.ua/en" rel="alternate" type="text/html" hreflang="en" />
  <updated>2026-01-27T12:13:08+00:00</updated>
  <id>https://blog.andygol.co.ua/en/feed.xml</id>
  <title type="html">Andrii Holovin – Blog</title>
  <subtitle>Personal blog of Andrii Holovin. A little about everything.</subtitle>
  <author>
      <name>Andrii Holovin</name>
    </author>
    <entry xml:lang="en">
      <title type="html">Deploying an HA Kubernetes cluster with an external etcd topology on a local machine</title>
      <link href="https://blog.andygol.co.ua/en/2026/01/12/ha-k8s-cluster/" rel="alternate" type="text/html" title="Deploying an HA Kubernetes cluster with an external etcd topology on a local machine"/>
      <published>2026-01-12T06:30:00+00:00</published>
      <updated>2026-01-12T06:30:00+00:00</updated>
      <id>https://blog.andygol.co.ua/en/2026/01/12/ha-k8s-cluster</id>
      <content type="html" xml:base="https://blog.andygol.co.ua/en/2026/01/12/ha-k8s-cluster/">
        &lt;p&gt;In this guide, we will explore the steps to deploy a High Availability (HA) Kubernetes cluster with an external etcd topology on a local machine running macOS. We will use Multipass to create virtual machines, cloud-init for their initialization, kubeadm for cluster initialization, HAProxy as a load balancer for control plane nodes, Calico as a Container Network Interface (&lt;a href=&quot;https://www.cni.dev&quot;&gt;CNI&lt;/a&gt;), and &lt;a href=&quot;https://blog.andygol.co.ua/en/2025/12/12/k8s-cluster-with-kubeadm/#step-7-install-metallb&quot;&gt;MetalLB for load balancing traffic to worker nodes&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;This guide is intended both for self-study and for taking the first steps in deploying a production-level cluster. Each step is explained in detail, with descriptions of components and their roles.&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;/images/2026/01/2026-01-12-multipass-ha-k8s-cluster.png&quot; alt=&quot;screenshot of Multipass with 10 virtual machines&quot; /&gt;&lt;/p&gt;

&lt;h2 id=&quot;cluster-architecture&quot;&gt;Cluster Architecture&lt;/h2&gt;

&lt;p&gt;Highly available Kubernetes clusters are used in production environments to ensure the continuous operation of applications. Redundancy of key components lets the cluster avoid downtime when individual control plane nodes or etcd members fail.&lt;/p&gt;

&lt;div style=&quot;display: flex; justify-content: center; margin: 30px 0;&quot;&gt;
  &lt;iframe width=&quot;560&quot; height=&quot;315&quot; src=&quot;https://www.youtube.com/embed/03kdtcJZaQ8?si=gQK1f5iuz6K5CynL&quot; title=&quot;YouTube video player&quot; frameborder=&quot;0&quot; allow=&quot;accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share&quot; referrerpolicy=&quot;strict-origin-when-cross-origin&quot; allowfullscreen=&quot;&quot;&gt;&lt;/iframe&gt;
&lt;/div&gt;

&lt;h3 id=&quot;components&quot;&gt;Components&lt;/h3&gt;

&lt;p&gt;We will deploy the cluster using kubeadm on virtual machines running Ubuntu, created by Multipass and initialized using cloud-init.&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;
    &lt;p&gt;&lt;strong&gt;Multipass&lt;/strong&gt; is a tool for quickly creating Ubuntu virtual machines in a cloud-like fashion on Linux, macOS, and Windows. It provides a simple yet powerful command-line interface that lets you quickly access Ubuntu or create your own local mini-cloud.&lt;/p&gt;

    &lt;p&gt;Local development and testing can be challenging, but &lt;a href=&quot;https://multipass.run/&quot;&gt;Multipass&lt;/a&gt; simplifies these processes by automating the deployment and teardown of infrastructure. Multipass has a library of ready-to-use images that can be used to launch specialized virtual machines or your own custom virtual machines, configured using the powerful cloud-init interface.&lt;/p&gt;

    &lt;p&gt;Developers can use Multipass to prototype cloud deployments and create new, customized Linux development environments on any machine. Multipass is the fastest way for Mac and Windows users to get an Ubuntu command line on their systems. You can also use it as a sandbox to try new things without affecting the host machine and without the need for dual-booting.&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;&lt;strong&gt;Cloud-init&lt;/strong&gt; is the industry-standard, multi-distribution method for initializing cloud instances across platforms. It is supported by all major public cloud providers, private cloud infrastructure providers, and bare-metal installations.&lt;/p&gt;

    &lt;p&gt;During boot, &lt;a href=&quot;https://cloud-init.io&quot;&gt;cloud-init&lt;/a&gt; detects the cloud it is running in and initializes the system accordingly. Cloud instances are automatically provided with networking, storage, SSH keys, packages, and other pre-configured system components on first boot.&lt;/p&gt;

    &lt;p&gt;Cloud-init provides the glue between launching a cloud instance and connecting to it, so that the instance works as expected.&lt;/p&gt;

    &lt;p&gt;For cloud users, cloud-init provides cloud instance configuration management at first boot without the need to manually install required components. For cloud service providers, it provides instance initialization that can be integrated with your cloud.&lt;/p&gt;

    &lt;p&gt;If you want to learn more about what cloud-init is, why it is needed, and how it works, read the &lt;a href=&quot;https://cloudinit.readthedocs.io/en/latest/explanation/introduction.html#introduction&quot;&gt;detailed description&lt;/a&gt; in its documentation.&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;&lt;strong&gt;Kubeadm&lt;/strong&gt; is a tool that provides the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;kubeadm init&lt;/code&gt; and &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;kubeadm join&lt;/code&gt; commands as best-practice “fast paths” for creating Kubernetes clusters.&lt;/p&gt;

    &lt;p&gt;&lt;a href=&quot;https://andygol-k8s.netlify.app/docs/reference/setup-tools/kubeadm/&quot;&gt;Kubeadm&lt;/a&gt; performs the actions needed to get a minimum viable cluster up and running. By design, it deals only with bootstrapping the cluster; creating machine instances is out of its scope. Likewise, installing additional components such as the Kubernetes Dashboard, monitoring tools, or cloud-specific overlays is not part of its job.&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;&lt;strong&gt;Etcd with external topology&lt;/strong&gt;. &lt;a href=&quot;https://andygol-etcd.netlify.app/&quot;&gt;Etcd&lt;/a&gt; is a strongly consistent, distributed key-value store that provides a reliable way to store data that needs to be accessed by a distributed system or a cluster of machines. It gracefully handles leader elections during network partitions and can tolerate machine failures, including failure of the leader node.&lt;/p&gt;

    &lt;p&gt;Kubernetes uses etcd to store all of its configuration and cluster state. This includes information about nodes, pods, network configuration, secrets, and other Kubernetes resources. The high-availability topology of a Kubernetes cluster involves two placement options for etcd: &lt;a href=&quot;https://andygol-k8s.netlify.app/docs/setup/production-environment/tools/kubeadm/ha-topology/#stacked-etcd-topology&quot;&gt;stacked etcd topology&lt;/a&gt; and &lt;a href=&quot;https://andygol-k8s.netlify.app/docs/setup/production-environment/tools/kubeadm/ha-topology/#external-etcd-topology&quot;&gt;external etcd topology&lt;/a&gt;. In this guide, we will focus on the external etcd topology, where etcd is deployed separately from the Kubernetes control plane nodes. This provides better isolation, scalability, and flexibility in managing the cluster.&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;&lt;strong&gt;HAProxy&lt;/strong&gt;. To distribute traffic to the control plane nodes, we will use the load balancer &lt;a href=&quot;https://www.haproxy.org/&quot;&gt;HAProxy&lt;/a&gt;. This will allow us to ensure high availability of the Kubernetes API server. In addition, we will deploy local HAProxy instances on each control plane node to access the etcd cluster nodes.&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;&lt;strong&gt;Calico&lt;/strong&gt;. Since &lt;strong&gt;kubeadm&lt;/strong&gt; does not create the network in which pods operate, we will use &lt;a href=&quot;https://projectcalico.org/&quot;&gt;Calico&lt;/a&gt;, a popular tool that implements the Container Network Interface (CNI) and provides network policies. It is a unified platform for all Kubernetes networking, network security, and observability needs, working with any Kubernetes distribution. Calico simplifies network security enforcement using network policies.&lt;/p&gt;
  &lt;/li&gt;
&lt;/ul&gt;
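
&lt;p&gt;To make the HAProxy role concrete, here is a minimal sketch of a configuration fragment that balances the Kubernetes API server across the control plane nodes. This is an illustration only: the IPs follow the topology diagram below, and the frontend/backend names are placeholders:&lt;/p&gt;

```
frontend k8s-api
    bind *:6443
    mode tcp
    default_backend k8s-api-nodes

backend k8s-api-nodes
    mode tcp
    balance roundrobin
    option tcp-check
    server cp1 10.10.0.21:6443 check
    server cp2 10.10.0.22:6443 check
    server cp3 10.10.0.23:6443 check
```

&lt;p&gt;Port 6443 is the default Kubernetes API server port; TCP mode is used because the API traffic is TLS and HAProxy only needs to pass it through.&lt;/p&gt;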

&lt;h3 id=&quot;cluster-topology&quot;&gt;Cluster Topology&lt;/h3&gt;

&lt;pre&gt;&lt;code class=&quot;language-mermaid&quot;&gt;graph TB
  subgraph C [Control Plane and etcd]
    direction TB
    subgraph CP [Control Plane Nodes]
        direction LR
        CP1 --&amp;gt; CP2
        CP2 --&amp;gt; CP1
        CP1 --&amp;gt; CP3
        CP3 --&amp;gt; CP2
        CP2 --&amp;gt; CP3
        CP3 --&amp;gt; CP1
    end

    subgraph E [etcd cluster]
        direction LR
        E1 --&amp;gt; E2
        E2 --&amp;gt; E1
        E1 --&amp;gt; E3
        E3 --&amp;gt; E2
        E2 --&amp;gt; E3
        E3 --&amp;gt; E1
    end

    subgraph HA [Keepalived
    10.10.0.100]
      HA1(HAProxy
      10.10.0.101) &amp;lt;-----&amp;gt;
      HA2(HAProxy
      10.10.0.102)
    end

    CP &amp;lt;--&amp;gt; HA
    E &amp;lt;--&amp;gt; CP

  end

    W1(Worker
    Node 1
    10.10.0.31)
    W2(Worker
    Node 2
    10.10.0.32)
    W3(Worker
    Node 3
    10.10.0.33)
    WN(Worker
    Node …
    10.10.0.9X)

    HA --&amp;gt; PN(Calico CNI)
    PN --&amp;gt; W1 &amp;amp; W2 &amp;amp; W3 &amp;amp; WN--&amp;gt; WLB &amp;lt;--&amp;gt; U(Users)

    CP1(Control Plane
    Node 1
    10.10.0.21)
    CP2(Control Plane
    Node 2
    10.10.0.22)
    CP3(Control Plane
    Node 3
    10.10.0.23)
    E1(etcd
    Node 1
    10.10.0.11)
    E2(etcd
    Node 2
    10.10.0.12)
    E3(etcd
    Node 3
    10.10.0.13)

    WLB(MetalLB
    10.10.0.200)
&lt;/code&gt;&lt;/pre&gt;

&lt;h2 id=&quot;prerequisites&quot;&gt;Prerequisites&lt;/h2&gt;

&lt;p&gt;To deploy the cluster, we will need:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;A local computer with &lt;strong&gt;Multipass&lt;/strong&gt; installed. For macOS, it can be installed using &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;brew install multipass&lt;/code&gt;&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;kubectl&lt;/strong&gt; — The Kubernetes command-line tool, &lt;a href=&quot;https://andygol-k8s.netlify.app/docs/tasks/tools/#kubectl&quot;&gt;kubectl&lt;/a&gt;, allows you to run commands against Kubernetes clusters. You can use kubectl to deploy applications, inspect and manage cluster resources, and view logs (&lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;brew install kubectl&lt;/code&gt;)&lt;/li&gt;
  &lt;li&gt;Basic knowledge of &lt;a href=&quot;https://andygol-k8s.netlify.app/docs/concepts/overview/&quot;&gt;Kubernetes&lt;/a&gt; and &lt;a href=&quot;https://wikipedia.org/wiki/Linux&quot;&gt;Linux&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;yq&lt;/strong&gt; (&lt;a href=&quot;https://mikefarah.gitbook.io/yq&quot;&gt;mikefarah’s version&lt;/a&gt;) — a lightweight and portable command-line processor for YAML, JSON, INI, and XML.&lt;/li&gt;
&lt;/ul&gt;
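
&lt;p&gt;A quick way to confirm the prerequisites are in place is to check that each tool is on the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;PATH&lt;/code&gt; (the tool list mirrors the one above; the output file name is illustrative):&lt;/p&gt;

```shell
# Report which of the required tools are already on PATH.
# Uses only POSIX shell utilities; adjust the list to your needs.
for tool in multipass kubectl yq; do
  if command -v "$tool" >/dev/null; then
    echo "$tool: found"
  else
    echo "$tool: NOT FOUND - install it before continuing"
  fi
done | tee tool-check.txt
```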

&lt;h2 id=&quot;ssh-key-setup&quot;&gt;SSH Key Setup&lt;/h2&gt;

&lt;p&gt;Secure Shell (SSH) is a network protocol for managing a computer remotely. It encrypts all traffic, including the authentication process and the transmission of passwords and secrets. To access the virtual machines, you need to set up SSH keys; we will use them to connect to all created virtual machines and to transfer files between them.&lt;/p&gt;

&lt;h3 id=&quot;generating-an-ssh-key&quot;&gt;Generating an SSH Key&lt;/h3&gt;

&lt;p&gt;To generate a key pair (public and private), run the following command:&lt;/p&gt;

&lt;div class=&quot;language-bash highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;c&quot;&gt;# Create a new pair of SSH keys (if you don&apos;t have one)&lt;/span&gt;
ssh-keygen &lt;span class=&quot;nt&quot;&gt;-t&lt;/span&gt; rsa &lt;span class=&quot;nt&quot;&gt;-b&lt;/span&gt; 4096 &lt;span class=&quot;nt&quot;&gt;-C&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;k8s-cluster&quot;&lt;/span&gt; &lt;span class=&quot;nt&quot;&gt;-f&lt;/span&gt; ~/.ssh/k8s_cluster_key

&lt;span class=&quot;c&quot;&gt;# View the public key&lt;/span&gt;
&lt;span class=&quot;nb&quot;&gt;cat&lt;/span&gt; ~/.ssh/k8s_cluster_key.pub
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;h3 id=&quot;updating-cloud-init-configuration&quot;&gt;Updating cloud-init Configuration&lt;/h3&gt;

&lt;p&gt;Use the obtained public key in the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;snipets/cloud-init-user.yaml&lt;/code&gt; file in the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;ssh_authorized_keys&lt;/code&gt; section:&lt;/p&gt;

&lt;div class=&quot;language-yaml highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;na&quot;&gt;users&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
  &lt;span class=&quot;pi&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;na&quot;&gt;name&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;k8sadmin&lt;/span&gt;
    &lt;span class=&quot;na&quot;&gt;ssh_authorized_keys&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
      &lt;span class=&quot;c1&quot;&gt;# Replace with your actual public key&lt;/span&gt;
      &lt;span class=&quot;pi&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC...&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;You can also use variable substitution at runtime:&lt;/p&gt;

&lt;div class=&quot;language-bash highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;nv&quot;&gt;$ &lt;/span&gt;multipass launch &lt;span class=&quot;nt&quot;&gt;--cloud-init&lt;/span&gt; - &lt;span class=&quot;o&quot;&gt;&amp;lt;&amp;lt;&lt;/span&gt;&lt;span class=&quot;no&quot;&gt;EOF&lt;/span&gt;&lt;span class=&quot;sh&quot;&gt;
users:
  - name: username
    sudo: ALL=(ALL) NOPASSWD:ALL
    ssh_authorized_keys:
      - &lt;/span&gt;&lt;span class=&quot;si&quot;&gt;$(&lt;/span&gt; &lt;span class=&quot;nb&quot;&gt;cat&lt;/span&gt; ~/.ssh/id_rsa.pub &lt;span class=&quot;si&quot;&gt;)&lt;/span&gt;&lt;span class=&quot;sh&quot;&gt;
&lt;/span&gt;&lt;span class=&quot;no&quot;&gt;EOF
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;
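
&lt;p&gt;Before passing a substituted configuration to Multipass, you can render it to a file and inspect the expanded key first. A minimal sketch (the output file name is illustrative, and the key path assumes the pair generated earlier):&lt;/p&gt;

```shell
# Render the user snippet with the public key expanded, then inspect it.
# If the key file does not exist, the key line is simply left empty.
PUBKEY=$(cat ~/.ssh/k8s_cluster_key.pub 2>/dev/null)
{
  echo "users:"
  echo "  - name: k8sadmin"
  echo "    sudo: ALL=(ALL) NOPASSWD:ALL"
  echo "    ssh_authorized_keys:"
  echo "      - $PUBKEY"
} | tee cloud-init-rendered.yaml
```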

&lt;h3 id=&quot;accessing-virtual-machines-via-ssh&quot;&gt;Accessing Virtual Machines via SSH&lt;/h3&gt;

&lt;p&gt;After deployment, we can access the virtual machines using the following commands:&lt;/p&gt;

&lt;div class=&quot;language-bash highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;multipass shell &amp;lt;vm-name&amp;gt;
&lt;span class=&quot;c&quot;&gt;# or&lt;/span&gt;
ssh &lt;span class=&quot;nt&quot;&gt;-i&lt;/span&gt; ~/.ssh/k8s_cluster_key k8sadmin@&amp;lt;vm-ip&amp;gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Multipass has a built-in &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;shell&lt;/code&gt; command that allows us to connect to any created virtual machine without additional SSH configuration. However, it uses the default user &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;ubuntu&lt;/code&gt;. If you want to connect as the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;k8sadmin&lt;/code&gt; user, which we will create using cloud-init, use &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;ssh -i ~/.ssh/k8s_cluster_key k8sadmin@&amp;lt;vm-ip&amp;gt;&lt;/code&gt;.&lt;/p&gt;
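
&lt;p&gt;To avoid typing the key path and IP every time, you can add host aliases for the control plane nodes to your SSH configuration. A sketch, written to a local file for illustration (normally these entries would go into &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;~/.ssh/config&lt;/code&gt;; the IPs follow the cluster topology above):&lt;/p&gt;

```shell
# Generate aliases cp1..cp3 pointing at 10.10.0.21..10.10.0.23
for i in 1 2 3; do
  printf 'Host cp%s\n  HostName 10.10.0.2%s\n  User k8sadmin\n  IdentityFile ~/.ssh/k8s_cluster_key\n\n' "$i" "$i"
done > ssh_config
cat ssh_config
```

&lt;p&gt;After appending these entries to &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;~/.ssh/config&lt;/code&gt;, &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;ssh cp1&lt;/code&gt; is enough to reach the first control plane node.&lt;/p&gt;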

&lt;h2 id=&quot;environment-preparation&quot;&gt;Environment Preparation&lt;/h2&gt;

&lt;p&gt;Create a project directory and the necessary files.&lt;/p&gt;

&lt;div class=&quot;language-bash highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;nb&quot;&gt;mkdir &lt;/span&gt;k8s-ha-cluster
&lt;span class=&quot;nb&quot;&gt;cd &lt;/span&gt;k8s-ha-cluster
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;h2 id=&quot;creating-scripts-and-configurations&quot;&gt;Creating Scripts and Configurations&lt;/h2&gt;

&lt;h3 id=&quot;virtual-machine-parameters&quot;&gt;Virtual Machine Parameters&lt;/h3&gt;

&lt;p&gt;To deploy the virtual machines, we need the following parameters:&lt;/p&gt;

&lt;table&gt;
  &lt;thead&gt;
    &lt;tr&gt;
      &lt;th style=&quot;text-align: left&quot;&gt;Node&lt;/th&gt;
      &lt;th style=&quot;text-align: center&quot;&gt;Quantity&lt;/th&gt;
      &lt;th style=&quot;text-align: center&quot;&gt;&lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;cpus&lt;/code&gt;&lt;/th&gt;
      &lt;th style=&quot;text-align: left&quot;&gt;&lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;memory&lt;/code&gt;&lt;/th&gt;
      &lt;th style=&quot;text-align: left&quot;&gt;&lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;disk&lt;/code&gt;&lt;/th&gt;
      &lt;th style=&quot;text-align: left&quot;&gt;&lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;network&lt;/code&gt;&lt;/th&gt;
    &lt;/tr&gt;
  &lt;/thead&gt;
  &lt;tbody&gt;
    &lt;tr&gt;
      &lt;td style=&quot;text-align: left&quot;&gt;etcd&lt;/td&gt;
      &lt;td style=&quot;text-align: center&quot;&gt;3+ or&lt;br /&gt;(2n+1)&lt;sup&gt;&lt;a href=&quot;https://andygol-etcd.netlify.app/docs/v3.5/faq/#what-is-failure-tolerance&quot;&gt;[1]&lt;/a&gt;&lt;/sup&gt;&lt;/td&gt;
      &lt;td style=&quot;text-align: center&quot;&gt;2&lt;/td&gt;
      &lt;td style=&quot;text-align: left&quot;&gt;2G&lt;/td&gt;
      &lt;td style=&quot;text-align: left&quot;&gt;5G&lt;/td&gt;
      &lt;td style=&quot;text-align: left&quot;&gt;name=en0,mode=manual&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td style=&quot;text-align: left&quot;&gt;control plane&lt;/td&gt;
      &lt;td style=&quot;text-align: center&quot;&gt;3&lt;/td&gt;
      &lt;td style=&quot;text-align: center&quot;&gt;2&lt;/td&gt;
      &lt;td style=&quot;text-align: left&quot;&gt;2.5G&lt;/td&gt;
      &lt;td style=&quot;text-align: left&quot;&gt;8G&lt;/td&gt;
      &lt;td style=&quot;text-align: left&quot;&gt;name=en0,mode=manual&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td style=&quot;text-align: left&quot;&gt;worker nodes&lt;/td&gt;
      &lt;td style=&quot;text-align: center&quot;&gt;3+&lt;/td&gt;
      &lt;td style=&quot;text-align: center&quot;&gt;2&lt;/td&gt;
      &lt;td style=&quot;text-align: left&quot;&gt;2G&lt;/td&gt;
      &lt;td style=&quot;text-align: left&quot;&gt;10G&lt;/td&gt;
      &lt;td style=&quot;text-align: left&quot;&gt;name=en0,mode=manual&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td style=&quot;text-align: left&quot;&gt;HAProxy+Keepalived&lt;/td&gt;
      &lt;td style=&quot;text-align: center&quot;&gt;2&lt;/td&gt;
      &lt;td style=&quot;text-align: center&quot;&gt;1&lt;/td&gt;
      &lt;td style=&quot;text-align: left&quot;&gt;1G&lt;/td&gt;
      &lt;td style=&quot;text-align: left&quot;&gt;5G&lt;/td&gt;
      &lt;td style=&quot;text-align: left&quot;&gt;name=en0,mode=manual&lt;/td&gt;
    &lt;/tr&gt;
  &lt;/tbody&gt;
&lt;/table&gt;

&lt;h3 id=&quot;cloud-init-configurations&quot;&gt;Cloud-init Configurations&lt;/h3&gt;

&lt;p&gt;We will use pre-created cloud-init virtual machine configuration snippets to speed up their deployment.&lt;/p&gt;

&lt;h4 id=&quot;user-creation&quot;&gt;User Creation&lt;/h4&gt;

&lt;p&gt;Let’s create the settings for the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;k8sadmin&lt;/code&gt; user in &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;snipets/cloud-init-user.yaml&lt;/code&gt;:&lt;/p&gt;

&lt;div class=&quot;language-yaml highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;c1&quot;&gt;# Create the directory if it doesn&apos;t exist&lt;/span&gt;
&lt;span class=&quot;s&quot;&gt;mkdir -p snipets&lt;/span&gt;

&lt;span class=&quot;s&quot;&gt;cat &amp;lt;&amp;lt; EOF &amp;gt; snipets/cloud-init-user.yaml&lt;/span&gt;
&lt;span class=&quot;c1&quot;&gt;# User settings&lt;/span&gt;
&lt;span class=&quot;na&quot;&gt;users&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
  &lt;span class=&quot;pi&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;na&quot;&gt;name&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;k8sadmin&lt;/span&gt;
    &lt;span class=&quot;na&quot;&gt;sudo&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;ALL=(ALL)&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt; &lt;/span&gt;&lt;span class=&quot;s&quot;&gt;NOPASSWD:ALL&quot;&lt;/span&gt;
    &lt;span class=&quot;na&quot;&gt;groups&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;sudo,users,containerd&lt;/span&gt;
    &lt;span class=&quot;na&quot;&gt;homedir&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;/home/k8sadmin&lt;/span&gt;
    &lt;span class=&quot;na&quot;&gt;shell&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;/bin/bash&lt;/span&gt;
    &lt;span class=&quot;na&quot;&gt;lock_passwd&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;no&quot;&gt;false&lt;/span&gt;
    &lt;span class=&quot;na&quot;&gt;ssh_authorized_keys&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
      &lt;span class=&quot;c1&quot;&gt;# Your SSH key&lt;/span&gt;
      &lt;span class=&quot;pi&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;$( cat ~/.ssh/k8s_cluster_key.pub )&lt;/span&gt;
&lt;span class=&quot;s&quot;&gt;EOF&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;We have already discussed the user configuration in the SSH key settings. You can read about other parameters in the &lt;a href=&quot;https://cloudinit.readthedocs.io/en/latest/reference/examples.html#including-users-and-groups&quot;&gt;Including users and groups&lt;/a&gt; section of the cloud-init documentation.&lt;/p&gt;

&lt;h4 id=&quot;base-configuration-for-cluster-nodes&quot;&gt;Base Configuration for Cluster Nodes&lt;/h4&gt;

&lt;p&gt;The base cloud-init configuration for our cluster nodes will be in &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;snipets/cloud-init-base.yaml&lt;/code&gt;:&lt;/p&gt;

&lt;div class=&quot;language-yaml highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;c1&quot;&gt;#cloud-config&lt;/span&gt;

&lt;span class=&quot;c1&quot;&gt;# Add user settings from snipets/cloud-init-user.yaml here&lt;/span&gt;

&lt;span class=&quot;c1&quot;&gt;# Commands that run early in the boot stage to prepare the GPG key&lt;/span&gt;
&lt;span class=&quot;na&quot;&gt;bootcmd&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
  &lt;span class=&quot;pi&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;mkdir -p /etc/apt/keyrings&lt;/span&gt;
  &lt;span class=&quot;pi&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.34/deb/Release.key | gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg&lt;/span&gt;

&lt;span class=&quot;c1&quot;&gt;# Apt repository configuration&lt;/span&gt;
&lt;span class=&quot;na&quot;&gt;apt&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
  &lt;span class=&quot;na&quot;&gt;sources&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
    &lt;span class=&quot;na&quot;&gt;kubernetes.list&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
      &lt;span class=&quot;na&quot;&gt;source&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;deb&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt; &lt;/span&gt;&lt;span class=&quot;s&quot;&gt;[signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg]&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt; &lt;/span&gt;&lt;span class=&quot;s&quot;&gt;https://pkgs.k8s.io/core:/stable:/v1.34/deb/&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt; &lt;/span&gt;&lt;span class=&quot;s&quot;&gt;/&quot;&lt;/span&gt;

&lt;span class=&quot;c1&quot;&gt;# System update&lt;/span&gt;
&lt;span class=&quot;na&quot;&gt;package_update&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;no&quot;&gt;true&lt;/span&gt;
&lt;span class=&quot;na&quot;&gt;package_upgrade&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;no&quot;&gt;true&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;To install the components the cluster needs, we have to &lt;a href=&quot;https://andygol-k8s.netlify.app/docs/setup/production-environment/tools/kubeadm/install-kubeadm/#k8s-install-0&quot;&gt;use the Kubernetes project repository&lt;/a&gt;. The &lt;a href=&quot;https://cloudinit.readthedocs.io/en/latest/reference/yaml_examples/boot_cmds.html#run-commands-in-early-boot&quot;&gt;&lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;bootcmd&lt;/code&gt; section&lt;/a&gt; is very similar to &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;runcmd&lt;/code&gt;, except that its commands run at the very beginning of the boot process; in it we fetch the GPG key and save it to &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;/etc/apt/keyrings&lt;/code&gt;, and then add the Kubernetes repository to the &lt;a href=&quot;https://cloudinit.readthedocs.io/en/latest/reference/examples.html#additional-apt-configuration-and-repositories&quot;&gt;apt sources list&lt;/a&gt; in the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;apt&lt;/code&gt; section. The &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;package_update&lt;/code&gt; and &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;package_upgrade&lt;/code&gt; lines fetch the latest system updates during boot, equivalent to running &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;apt-get update&lt;/code&gt; and &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;apt-get upgrade&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;In the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;packages&lt;/code&gt; section, we will specify all the necessary packages to be installed on each virtual machine of the cluster.&lt;/p&gt;

&lt;div class=&quot;language-yaml highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;c1&quot;&gt;# Install basic packages&lt;/span&gt;
&lt;span class=&quot;na&quot;&gt;packages&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
  &lt;span class=&quot;pi&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;apt-transport-https&lt;/span&gt;
  &lt;span class=&quot;pi&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;ca-certificates&lt;/span&gt;
  &lt;span class=&quot;pi&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;curl&lt;/span&gt;
  &lt;span class=&quot;pi&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;gnupg&lt;/span&gt;
  &lt;span class=&quot;pi&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;lsb-release&lt;/span&gt;
  &lt;span class=&quot;pi&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;containerd&lt;/span&gt;
  &lt;span class=&quot;pi&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;kubelet&lt;/span&gt;
  &lt;span class=&quot;pi&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;kubeadm&lt;/span&gt;
  &lt;span class=&quot;pi&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;kubectl&lt;/span&gt; &lt;span class=&quot;c1&quot;&gt;# Except for worker nodes&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;In addition to system service packages, we also &lt;a href=&quot;https://cloudinit.readthedocs.io/en/latest/reference/examples.html#install-arbitrary-packages&quot;&gt;install&lt;/a&gt; &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;containerd&lt;/code&gt;, &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;kubelet&lt;/code&gt;, &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;kubeadm&lt;/code&gt;, and &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;kubectl&lt;/code&gt;, which are the core components for the operation of the Kubernetes cluster.&lt;/p&gt;

&lt;p&gt;&lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;containerd&lt;/code&gt; is the container runtime engine used by Kubernetes to run containerized applications. &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;kubelet&lt;/code&gt; is the agent that runs on each node in the Kubernetes cluster and is responsible for running containers in pods. &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;kubeadm&lt;/code&gt; is the tool for quickly deploying the Kubernetes cluster, and &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;kubectl&lt;/code&gt; is the command-line tool for interacting with the Kubernetes cluster.&lt;/p&gt;

&lt;div class=&quot;language-yaml highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;c1&quot;&gt;# User configuration&lt;/span&gt;
&lt;span class=&quot;na&quot;&gt;users&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
  &lt;span class=&quot;pi&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;na&quot;&gt;name&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;k8sadmin&lt;/span&gt;
    &lt;span class=&quot;na&quot;&gt;groups&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;sudo,users,containerd&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Our user is a member of the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;containerd&lt;/code&gt; group, which we will use to access the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;/run/containerd/containerd.sock&lt;/code&gt; socket, enabling the use of &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;crictl&lt;/code&gt; without the need to use &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;sudo&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;The &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;write_files&lt;/code&gt; instruction in cloud-init allows &lt;a href=&quot;https://cloudinit.readthedocs.io/en/latest/reference/examples.html#writing-out-arbitrary-files&quot;&gt;creating files with specified content&lt;/a&gt; during the initialization of the virtual machine. We will use it to create files for configuring kernel modules for Kubernetes operation, enabling IP forwarding, creating settings for &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;crictl&lt;/code&gt;, and a script that we will use to initialize and start &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;kubelet&lt;/code&gt;. (See the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;write_files&lt;/code&gt; section in the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;cloud-init-base.yaml&lt;/code&gt; file)&lt;/p&gt;
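
&lt;p&gt;To illustrate, the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;write_files&lt;/code&gt; entries for the kernel modules, IP forwarding, and &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;crictl&lt;/code&gt; typically look like the following sketch; the exact contents in &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;cloud-init-base.yaml&lt;/code&gt; may differ:&lt;/p&gt;

```yaml
write_files:
  # Kernel modules required by container networking
  - path: /etc/modules-load.d/k8s.conf
    content: |
      overlay
      br_netfilter
  # Let iptables see bridged traffic and enable forwarding of pod traffic
  - path: /etc/sysctl.d/k8s.conf
    content: |
      net.bridge.bridge-nf-call-iptables = 1
      net.ipv4.ip_forward = 1
  # Point crictl at the containerd socket
  - path: /etc/crictl.yaml
    content: |
      runtime-endpoint: unix:///run/containerd/containerd.sock
```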

&lt;p&gt;After we have created all the necessary files using &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;write_files&lt;/code&gt;, we can &lt;a href=&quot;https://cloudinit.readthedocs.io/en/latest/reference/examples.html#run-commands-on-first-boot&quot;&gt;execute the specified commands&lt;/a&gt; for system configuration in the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;runcmd&lt;/code&gt; section. Here we list the commands that we would otherwise have to execute manually after the first system boot. In our case, we will freeze the versions of &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;kubeadm&lt;/code&gt;, &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;kubelet&lt;/code&gt;, and &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;kubectl&lt;/code&gt;, disable swap, create a configuration file for &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;containerd&lt;/code&gt;, and start it, as well as enable and start the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;kubelet&lt;/code&gt; service. (See the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;runcmd&lt;/code&gt; section in the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;cloud-init-base.yaml&lt;/code&gt; file)&lt;/p&gt;
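As a quick sanity check, the containerd substitutions performed in `runcmd` can be tried standalone on a sample fragment. The TOML below is an invented sample for illustration, not a full containerd config:

```shell
# Create an invented sample fragment mimicking the relevant lines of
# /etc/containerd/config.toml (illustration only).
cat > /tmp/containerd-sample.toml <<'TOML'
[plugins."io.containerd.grpc.v1.cri"]
  sandbox_image = "registry.k8s.io/pause:3.9"
  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
    SystemdCgroup = false
TOML

# The same two substitutions the cloud-init runcmd applies: switch the
# cgroup driver to systemd and pin the pause image version.
sed 's|SystemdCgroup = false|SystemdCgroup = true|g; s|sandbox_image = "registry.k8s.io/pause.*"|sandbox_image = "registry.k8s.io/pause:3.10.1"|' \
  /tmp/containerd-sample.toml > /tmp/containerd-patched.toml

grep -E 'SystemdCgroup|sandbox_image' /tmp/containerd-patched.toml
```

Writing the result to a separate file keeps the sketch portable (BSD `sed -i` on the macOS host requires a backup suffix, while the cloud-init version runs under GNU sed on Ubuntu).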

&lt;p&gt;Create two files:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;
    &lt;p&gt;&lt;a id=&quot;cloud-init-config-yaml&quot;&gt;&lt;/a&gt;&lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;snipets/cloud-init-config.yaml&lt;/code&gt; — to assign a static IP address to our virtual machines&lt;/p&gt;

    &lt;div class=&quot;language-yaml highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;s&quot;&gt;cat &amp;lt;&amp;lt; EOF &amp;gt; snipets/cloud-init-config.yaml&lt;/span&gt;
&lt;span class=&quot;c1&quot;&gt;#cloud-config&lt;/span&gt;
&lt;span class=&quot;na&quot;&gt;timezone&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;Europe/Kyiv&lt;/span&gt;

&lt;span class=&quot;na&quot;&gt;write_files&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
  &lt;span class=&quot;c1&quot;&gt;# Assigning a static IP address&lt;/span&gt;
  &lt;span class=&quot;c1&quot;&gt;# Variant with alias for bridge101 for the second network interface&lt;/span&gt;
  &lt;span class=&quot;pi&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;na&quot;&gt;path&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;/etc/netplan/60-static-ip.yaml&lt;/span&gt;
    &lt;span class=&quot;c1&quot;&gt;# Explicitly specify the value type&lt;/span&gt;
    &lt;span class=&quot;c1&quot;&gt;# permissions: !!str &quot;0755&quot; # https://github.com/canonical/multipass/issues/4176&lt;/span&gt;
    &lt;span class=&quot;na&quot;&gt;permissions&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;kt&quot;&gt;!!str&lt;/span&gt; &lt;span class=&quot;s1&quot;&gt;&apos;&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;0600&apos;&lt;/span&gt;
    &lt;span class=&quot;na&quot;&gt;content&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;pi&quot;&gt;|&lt;/span&gt;
      &lt;span class=&quot;s&quot;&gt;network:&lt;/span&gt;
        &lt;span class=&quot;s&quot;&gt;version: 2&lt;/span&gt;
        &lt;span class=&quot;s&quot;&gt;ethernets:&lt;/span&gt;
          &lt;span class=&quot;s&quot;&gt;enp0s2:&lt;/span&gt;
            &lt;span class=&quot;s&quot;&gt;# Here you specify the IP address of the specific virtual machine&lt;/span&gt;
            &lt;span class=&quot;s&quot;&gt;addresses:&lt;/span&gt;
              &lt;span class=&quot;s&quot;&gt;# - 10.10.0.24/24&lt;/span&gt;
            &lt;span class=&quot;s&quot;&gt;routes:&lt;/span&gt;
              &lt;span class=&quot;s&quot;&gt;- to: 10.10.0.0/24&lt;/span&gt;
                &lt;span class=&quot;s&quot;&gt;scope: link&lt;/span&gt;
&lt;span class=&quot;na&quot;&gt;runcmd&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
  &lt;span class=&quot;c1&quot;&gt;# Applying settings for using a static IP address&lt;/span&gt;
  &lt;span class=&quot;pi&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;netplan apply&lt;/span&gt;

  &lt;span class=&quot;c1&quot;&gt;# Setting vim as the default editor (optional)&lt;/span&gt;
  &lt;span class=&quot;pi&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;update-alternatives --set editor /usr/bin/vim.basic&lt;/span&gt;
&lt;span class=&quot;s&quot;&gt;EOF&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;    &lt;/div&gt;

    &lt;p&gt;☝️ You can also specify that you want to use &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;vim&lt;/code&gt; as your default editor here. If you prefer &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;nano&lt;/code&gt;, comment out or remove the line &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;- update-alternatives --set editor /usr/bin/vim.basic&lt;/code&gt;&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;and the file &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;snipets/cloud-init-base.yaml&lt;/code&gt; with basic settings for the cluster virtual machines&lt;/p&gt;

    &lt;div class=&quot;language-yaml highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;s&quot;&gt;cat &amp;lt;&amp;lt; &apos;EOF&apos; &amp;gt; snipets/cloud-init-base.yaml&lt;/span&gt;
&lt;span class=&quot;c1&quot;&gt;# Runs at an early boot stage to prepare the GPG key&lt;/span&gt;
&lt;span class=&quot;na&quot;&gt;bootcmd&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
  &lt;span class=&quot;pi&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;mkdir -p /etc/apt/keyrings&lt;/span&gt;
  &lt;span class=&quot;pi&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.34/deb/Release.key | gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg&lt;/span&gt;

&lt;span class=&quot;c1&quot;&gt;# Apt repository configuration&lt;/span&gt;
&lt;span class=&quot;na&quot;&gt;apt&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
  &lt;span class=&quot;na&quot;&gt;sources&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
    &lt;span class=&quot;na&quot;&gt;kubernetes.list&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
      &lt;span class=&quot;na&quot;&gt;source&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;deb&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt; &lt;/span&gt;&lt;span class=&quot;s&quot;&gt;[signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg]&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt; &lt;/span&gt;&lt;span class=&quot;s&quot;&gt;https://pkgs.k8s.io/core:/stable:/v1.34/deb/&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt; &lt;/span&gt;&lt;span class=&quot;s&quot;&gt;/&quot;&lt;/span&gt;

&lt;span class=&quot;c1&quot;&gt;# System update&lt;/span&gt;
&lt;span class=&quot;na&quot;&gt;package_update&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;no&quot;&gt;true&lt;/span&gt;
&lt;span class=&quot;na&quot;&gt;package_upgrade&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;no&quot;&gt;true&lt;/span&gt;

&lt;span class=&quot;c1&quot;&gt;# Install basic packages&lt;/span&gt;
&lt;span class=&quot;na&quot;&gt;packages&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
  &lt;span class=&quot;pi&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;apt-transport-https&lt;/span&gt;
  &lt;span class=&quot;pi&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;ca-certificates&lt;/span&gt;
  &lt;span class=&quot;pi&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;curl&lt;/span&gt;
  &lt;span class=&quot;pi&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;gnupg&lt;/span&gt;
  &lt;span class=&quot;pi&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;lsb-release&lt;/span&gt;
  &lt;span class=&quot;pi&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;containerd&lt;/span&gt;
  &lt;span class=&quot;pi&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;kubelet&lt;/span&gt;
  &lt;span class=&quot;pi&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;kubeadm&lt;/span&gt;
  &lt;span class=&quot;pi&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;kubectl&lt;/span&gt;

&lt;span class=&quot;na&quot;&gt;write_files&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
  &lt;span class=&quot;c1&quot;&gt;# Kernel module settings for Kubernetes&lt;/span&gt;
  &lt;span class=&quot;pi&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;na&quot;&gt;path&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;/etc/modules-load.d/k8s.conf&lt;/span&gt;
    &lt;span class=&quot;c1&quot;&gt;# permissions: !!str &apos;0644&apos;&lt;/span&gt;
    &lt;span class=&quot;na&quot;&gt;content&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;pi&quot;&gt;|&lt;/span&gt;
      &lt;span class=&quot;s&quot;&gt;overlay&lt;/span&gt;
      &lt;span class=&quot;s&quot;&gt;br_netfilter&lt;/span&gt;

  &lt;span class=&quot;c1&quot;&gt;# Enabling IPv4 packet forwarding&lt;/span&gt;
  &lt;span class=&quot;c1&quot;&gt;# https://andygol-k8s.netlify.app/docs/setup/production-environment/container-runtimes/#prerequisite-ipv4-forwarding-optional&lt;/span&gt;
  &lt;span class=&quot;pi&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;na&quot;&gt;path&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;/etc/sysctl.d/k8s.conf&lt;/span&gt;
    &lt;span class=&quot;c1&quot;&gt;# permissions: !!str &apos;0644&apos;&lt;/span&gt;
    &lt;span class=&quot;na&quot;&gt;content&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;pi&quot;&gt;|&lt;/span&gt;
      &lt;span class=&quot;s&quot;&gt;net.bridge.bridge-nf-call-iptables  = 1&lt;/span&gt;
      &lt;span class=&quot;s&quot;&gt;net.bridge.bridge-nf-call-ip6tables = 1&lt;/span&gt;
      &lt;span class=&quot;s&quot;&gt;net.ipv4.ip_forward                 = 1&lt;/span&gt;

  &lt;span class=&quot;c1&quot;&gt;# Config for crictl&lt;/span&gt;
  &lt;span class=&quot;c1&quot;&gt;# https://github.com/containerd/containerd/blob/main/docs/cri/crictl.md#install-crictl&lt;/span&gt;
  &lt;span class=&quot;c1&quot;&gt;# https://github.com/kubernetes-sigs/cri-tools/blob/master/docs/crictl.md&lt;/span&gt;
  &lt;span class=&quot;pi&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;na&quot;&gt;path&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;/etc/crictl.yaml&lt;/span&gt;
    &lt;span class=&quot;c1&quot;&gt;# permissions: !!str &apos;0644&apos;&lt;/span&gt;
    &lt;span class=&quot;na&quot;&gt;content&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;pi&quot;&gt;|&lt;/span&gt;
      &lt;span class=&quot;s&quot;&gt;runtime-endpoint: unix:///run/containerd/containerd.sock&lt;/span&gt;
      &lt;span class=&quot;s&quot;&gt;image-endpoint: unix:///run/containerd/containerd.sock&lt;/span&gt;
      &lt;span class=&quot;s&quot;&gt;timeout: 10&lt;/span&gt;
      &lt;span class=&quot;s&quot;&gt;debug: false&lt;/span&gt;

  &lt;span class=&quot;c1&quot;&gt;# Setting group for containerd socket&lt;/span&gt;
  &lt;span class=&quot;pi&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;na&quot;&gt;path&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;/etc/systemd/system/containerd.service.d/override.conf&lt;/span&gt;
    &lt;span class=&quot;na&quot;&gt;content&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;pi&quot;&gt;|&lt;/span&gt;
      &lt;span class=&quot;s&quot;&gt;[Service]&lt;/span&gt;
      &lt;span class=&quot;s&quot;&gt;ExecStartPost=/bin/sh -c &quot;chgrp containerd /run/containerd/containerd.sock &amp;amp;&amp;amp; chmod 660 /run/containerd/containerd.sock&quot;&lt;/span&gt;

  &lt;span class=&quot;c1&quot;&gt;# Script to start kubelet&lt;/span&gt;
  &lt;span class=&quot;pi&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;na&quot;&gt;path&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;/usr/local/bin/kubelet-start.sh&lt;/span&gt;
    &lt;span class=&quot;na&quot;&gt;permissions&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;kt&quot;&gt;!!str&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;0755&quot;&lt;/span&gt;
    &lt;span class=&quot;na&quot;&gt;content&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;pi&quot;&gt;|&lt;/span&gt;
      &lt;span class=&quot;s&quot;&gt;#!/bin/bash&lt;/span&gt;

      &lt;span class=&quot;s&quot;&gt;echo &quot;Starting kubelet service and waiting for readiness (timeout 300s)...&quot;&lt;/span&gt;

      &lt;span class=&quot;s&quot;&gt;# Enable and start the service&lt;/span&gt;
      &lt;span class=&quot;s&quot;&gt;sudo systemctl enable --now kubelet&lt;/span&gt;

      &lt;span class=&quot;s&quot;&gt;WAIT_LIMIT=300       # Maximum wait time in seconds&lt;/span&gt;
      &lt;span class=&quot;s&quot;&gt;ELAPSED_TIME=0       # Time already passed&lt;/span&gt;
      &lt;span class=&quot;s&quot;&gt;SLEEP_INTERVAL=1     # Initial interval (1 second)&lt;/span&gt;

      &lt;span class=&quot;s&quot;&gt;while ! systemctl is-active --quiet kubelet; do&lt;/span&gt;
          &lt;span class=&quot;s&quot;&gt;if [ &quot;$ELAPSED_TIME&quot; -ge &quot;$WAIT_LIMIT&quot; ]; then&lt;/span&gt;
              &lt;span class=&quot;s&quot;&gt;echo &quot;--------------------------------------------------------------&quot; &amp;gt;&amp;amp;2&lt;/span&gt;
              &lt;span class=&quot;s&quot;&gt;echo &quot;BROKEN-DOWN: kubelet did not start within $WAIT_LIMIT seconds.&quot; &amp;gt;&amp;amp;2&lt;/span&gt;
              &lt;span class=&quot;s&quot;&gt;echo &quot;Last error logs:&quot;                                               &amp;gt;&amp;amp;2&lt;/span&gt;
              &lt;span class=&quot;s&quot;&gt;journalctl -u kubelet -n 20 --no-pager                                &amp;gt;&amp;amp;2&lt;/span&gt;
              &lt;span class=&quot;s&quot;&gt;echo &quot;--------------------------------------------------------------&quot; &amp;gt;&amp;amp;2&lt;/span&gt;
              &lt;span class=&quot;s&quot;&gt;exit 1&lt;/span&gt;
          &lt;span class=&quot;s&quot;&gt;fi&lt;/span&gt;

          &lt;span class=&quot;s&quot;&gt;echo &quot;Waiting for kubelet... (elapsed $ELAPSED_TIME/$WAIT_LIMIT sec,&quot;&lt;/span&gt;
          &lt;span class=&quot;s&quot;&gt;echo &quot;next attempt in ${SLEEP_INTERVAL}sec)&quot;&lt;/span&gt;

          &lt;span class=&quot;s&quot;&gt;sleep $SLEEP_INTERVAL&lt;/span&gt;

          &lt;span class=&quot;s&quot;&gt;# Update counters&lt;/span&gt;
          &lt;span class=&quot;s&quot;&gt;ELAPSED_TIME=$((ELAPSED_TIME + SLEEP_INTERVAL))&lt;/span&gt;

          &lt;span class=&quot;s&quot;&gt;# Double the interval for the next time (progressive)&lt;/span&gt;
          &lt;span class=&quot;s&quot;&gt;# But do not make the interval longer than 20 seconds, to not &quot;oversleep&quot; readiness&lt;/span&gt;
          &lt;span class=&quot;s&quot;&gt;SLEEP_INTERVAL=$((SLEEP_INTERVAL * 2))&lt;/span&gt;
          &lt;span class=&quot;s&quot;&gt;if [ &quot;$SLEEP_INTERVAL&quot; -gt 20 ]; then&lt;/span&gt;
              &lt;span class=&quot;s&quot;&gt;SLEEP_INTERVAL=20&lt;/span&gt;
          &lt;span class=&quot;s&quot;&gt;fi&lt;/span&gt;
      &lt;span class=&quot;s&quot;&gt;done&lt;/span&gt;

      &lt;span class=&quot;s&quot;&gt;echo &quot;Kubelet successfully started in $ELAPSED_TIME seconds. Continuing...&quot;&lt;/span&gt;

&lt;span class=&quot;na&quot;&gt;runcmd&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
  &lt;span class=&quot;c1&quot;&gt;# Freezing the versions of Kubernetes packages&lt;/span&gt;
  &lt;span class=&quot;pi&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;apt-mark hold kubelet kubeadm kubectl&lt;/span&gt;

  &lt;span class=&quot;c1&quot;&gt;# Loading kernel modules&lt;/span&gt;
  &lt;span class=&quot;pi&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;modprobe overlay&lt;/span&gt;
  &lt;span class=&quot;pi&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;modprobe br_netfilter&lt;/span&gt;

  &lt;span class=&quot;c1&quot;&gt;# Applying sysctl parameters&lt;/span&gt;
  &lt;span class=&quot;pi&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;sysctl --system&lt;/span&gt;

  &lt;span class=&quot;c1&quot;&gt;# Configuring containerd&lt;/span&gt;
  &lt;span class=&quot;c1&quot;&gt;#&lt;/span&gt;
  &lt;span class=&quot;c1&quot;&gt;# Setting the systemd cgroup driver&lt;/span&gt;
  &lt;span class=&quot;c1&quot;&gt;# https://andygol-k8s.netlify.app/docs/setup/production-environment/container-runtimes/#containerd-systemd&lt;/span&gt;
  &lt;span class=&quot;c1&quot;&gt;# https://github.com/containerd/containerd/blob/main/docs/cri/config.md#cgroup-driver&lt;/span&gt;
  &lt;span class=&quot;c1&quot;&gt;#&lt;/span&gt;
  &lt;span class=&quot;c1&quot;&gt;# Overriding the pause image&lt;/span&gt;
  &lt;span class=&quot;c1&quot;&gt;# https://andygol-k8s.netlify.app/docs/setup/production-environment/container-runtimes/#override-pause-image-containerd&lt;/span&gt;

  &lt;span class=&quot;pi&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;mkdir -p /etc/containerd&lt;/span&gt;
  &lt;span class=&quot;pi&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;containerd config default | tee /etc/containerd/config.toml&lt;/span&gt;
  &lt;span class=&quot;pi&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;sed -i &apos;s|SystemdCgroup = &lt;/span&gt;&lt;span class=&quot;no&quot;&gt;false&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;|SystemdCgroup = &lt;/span&gt;&lt;span class=&quot;no&quot;&gt;true&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;|g; s|sandbox_image = &quot;registry.k8s.io/pause.*&quot;|sandbox_image = &quot;registry.k8s.io/pause:3.10.1&quot;|&apos; /etc/containerd/config.toml&lt;/span&gt;
  &lt;span class=&quot;pi&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;pi&quot;&gt;[&lt;/span&gt; &lt;span class=&quot;nv&quot;&gt;systemctl&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;nv&quot;&gt;daemon-reload&lt;/span&gt; &lt;span class=&quot;pi&quot;&gt;]&lt;/span&gt;
  &lt;span class=&quot;pi&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;pi&quot;&gt;[&lt;/span&gt; &lt;span class=&quot;nv&quot;&gt;systemctl&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;nv&quot;&gt;restart&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;nv&quot;&gt;containerd&lt;/span&gt; &lt;span class=&quot;pi&quot;&gt;]&lt;/span&gt;
  &lt;span class=&quot;pi&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;pi&quot;&gt;[&lt;/span&gt; &lt;span class=&quot;nv&quot;&gt;systemctl&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;nv&quot;&gt;enable&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;nv&quot;&gt;containerd&lt;/span&gt; &lt;span class=&quot;pi&quot;&gt;]&lt;/span&gt;

  &lt;span class=&quot;c1&quot;&gt;# Disabling swap&lt;/span&gt;
  &lt;span class=&quot;c1&quot;&gt;# https://andygol-k8s.netlify.app/docs/concepts/cluster-administration/swap-memory-management/#swap-and-control-plane-nodes&lt;/span&gt;
  &lt;span class=&quot;c1&quot;&gt;# https://andygol-k8s.netlify.app/docs/setup/production-environment/tools/kubeadm/install-kubeadm/#swap-configuration&lt;/span&gt;
  &lt;span class=&quot;pi&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;swapoff -a&lt;/span&gt;
  &lt;span class=&quot;pi&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;sed -i &apos;/ swap / s/^/#/&apos; /etc/fstab&lt;/span&gt;

  &lt;span class=&quot;c1&quot;&gt;# Enabling kubelet&lt;/span&gt;
  &lt;span class=&quot;pi&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;/usr/local/bin/kubelet-start.sh&lt;/span&gt;
&lt;span class=&quot;s&quot;&gt;EOF&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;    &lt;/div&gt;

    &lt;p&gt;The default access rights for files created by cloud-init are &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;0644&lt;/code&gt;. If you want to specify them explicitly, uncomment the corresponding lines &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;# permissions: !!str &apos;0644&apos;&lt;/code&gt;. We specify the data type explicitly (&lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;!!str &apos;0644&apos;&lt;/code&gt;) due to the issue described in ticket &lt;a href=&quot;https://github.com/canonical/multipass/issues/4176&quot;&gt;#4176&lt;/a&gt;.&lt;/p&gt;
  &lt;/li&gt;
&lt;/ul&gt;
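For illustration, an explicitly typed `permissions` entry looks like this (a minimal fragment; the path and content are taken from the base file above):

```yaml
write_files:
  - path: /etc/sysctl.d/k8s.conf
    # Force the YAML string type so the octal value is not misinterpreted
    # (see multipass issue #4176)
    permissions: !!str '0644'
    content: |
      net.ipv4.ip_forward = 1
```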

&lt;h4 id=&quot;configuration-for-etcd-nodes&quot;&gt;Configuration for etcd nodes&lt;/h4&gt;

&lt;p&gt;For etcd nodes, we will use the base configuration with additional settings specific to etcd. Create the file &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;snipets/cloud-init-etcd.yaml&lt;/code&gt;:&lt;/p&gt;

&lt;div class=&quot;language-yaml highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;c1&quot;&gt;# Extend cloud-init-base.yaml with the following settings&lt;/span&gt;

&lt;span class=&quot;s&quot;&gt;cat &amp;lt;&amp;lt; EOF &amp;gt; snipets/cloud-init-etcd.yaml&lt;/span&gt;
&lt;span class=&quot;na&quot;&gt;write_files&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
  &lt;span class=&quot;c1&quot;&gt;# https://andygol-k8s.netlify.app/docs/setup/production-environment/tools/kubeadm/setup-ha-etcd-with-kubeadm/#setup-up-the-cluster&lt;/span&gt;
  &lt;span class=&quot;pi&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;na&quot;&gt;path&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;/etc/systemd/system/kubelet.service.d/kubelet.conf&lt;/span&gt;
    &lt;span class=&quot;na&quot;&gt;content&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;pi&quot;&gt;|&lt;/span&gt;
      &lt;span class=&quot;s&quot;&gt;apiVersion: kubelet.config.k8s.io/v1beta1&lt;/span&gt;
      &lt;span class=&quot;s&quot;&gt;kind: KubeletConfiguration&lt;/span&gt;
      &lt;span class=&quot;s&quot;&gt;authentication:&lt;/span&gt;
        &lt;span class=&quot;s&quot;&gt;anonymous:&lt;/span&gt;
          &lt;span class=&quot;s&quot;&gt;enabled: false&lt;/span&gt;
        &lt;span class=&quot;s&quot;&gt;webhook:&lt;/span&gt;
          &lt;span class=&quot;s&quot;&gt;enabled: false&lt;/span&gt;
      &lt;span class=&quot;s&quot;&gt;authorization:&lt;/span&gt;
        &lt;span class=&quot;s&quot;&gt;mode: AlwaysAllow&lt;/span&gt;
      &lt;span class=&quot;s&quot;&gt;cgroupDriver: systemd&lt;/span&gt;
      &lt;span class=&quot;s&quot;&gt;address: 127.0.0.1&lt;/span&gt;
      &lt;span class=&quot;s&quot;&gt;containerRuntimeEndpoint: unix:///var/run/containerd/containerd.sock&lt;/span&gt;
      &lt;span class=&quot;s&quot;&gt;staticPodPath: /etc/kubernetes/manifests&lt;/span&gt;

  &lt;span class=&quot;pi&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;na&quot;&gt;path&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;/etc/systemd/system/kubelet.service.d/20-etcd-service-manager.conf&lt;/span&gt;
    &lt;span class=&quot;na&quot;&gt;content&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;pi&quot;&gt;|&lt;/span&gt;
      &lt;span class=&quot;s&quot;&gt;[Service]&lt;/span&gt;
      &lt;span class=&quot;s&quot;&gt;ExecStart=&lt;/span&gt;
      &lt;span class=&quot;s&quot;&gt;ExecStart=/usr/bin/kubelet --config=/etc/systemd/system/kubelet.service.d/kubelet.conf&lt;/span&gt;
      &lt;span class=&quot;s&quot;&gt;Restart=always&lt;/span&gt;
&lt;span class=&quot;s&quot;&gt;EOF&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;We will use the basic settings we created earlier and add etcd-specific settings in the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;write_files&lt;/code&gt; section, where we create a configuration file for &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;kubelet&lt;/code&gt; that configures it to work with etcd, as described in the “Configuring a highly available etcd cluster with kubeadm” section of the Kubernetes documentation.&lt;/p&gt;

&lt;h4 id=&quot;configuring-the-haproxy-balancer-node&quot;&gt;Configuring the HAProxy balancer node&lt;/h4&gt;

&lt;p&gt;For the control plane traffic load balancer nodes, we will create the file &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;snipets/cloud-init-haproxy.yaml&lt;/code&gt;, which will install the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;haproxy&lt;/code&gt; and &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;keepalived&lt;/code&gt; packages from the standard repository and add their configuration. To create a virtual machine for HAProxy, we will use the basic settings from &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;snipets/cloud-init-user.yaml&lt;/code&gt; and extend them with &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;write_files:&lt;/code&gt; instructions to create the balancer configuration files — &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;/etc/haproxy/haproxy.cfg&lt;/code&gt; and &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;/etc/keepalived/keepalived.conf&lt;/code&gt;. (See the section “&lt;a href=&quot;#configuring-the-load-balancer-for-control-plane-nodes-haproxykeepalived&quot;&gt;Configuring the load balancer for control plane nodes (HAProxy+Keepalived)&lt;/a&gt;.”)&lt;/p&gt;
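As a minimal, illustrative sketch of what such a balancer does (the addresses below are assumed from this guide's 10.10.0.0/24 plan, and the full files are created in the dedicated HAProxy+Keepalived section), a TCP frontend for the API server might look like:

```conf
# Illustrative haproxy.cfg fragment (assumed addresses, not the final file):
# the Keepalived VIP 10.10.0.100 accepts API traffic and HAProxy spreads it
# across the control plane nodes.
frontend kube-apiserver
    bind 10.10.0.100:6443
    mode tcp
    option tcplog
    default_backend control-plane

backend control-plane
    mode tcp
    balance roundrobin
    option tcp-check
    server cp-1 10.10.0.21:6443 check
    server cp-2 10.10.0.22:6443 check
    server cp-3 10.10.0.23:6443 check
```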

&lt;h3 id=&quot;choosing-a-network-topology&quot;&gt;Choosing a network topology&lt;/h3&gt;

&lt;p&gt;For this demonstration, we will choose a compact network (10.10.0.0/24).&lt;/p&gt;

&lt;pre&gt;&lt;code class=&quot;language-none&quot;&gt;10.10.0.0/24      - Main subnet (256 addresses)
├─ 10.10.0.1      - Gateway/Bridge (on the host machine)
├─ 10.10.0.10-19  - etcd (3+ nodes)
├─ 10.10.0.20-29  - Control Plane (3+ masters)
├─ 10.10.0.30-50  - Workers (up to 20 workers)
└─ 10.10.0.100    - HAProxy/Keepalived (two nodes 10.10.0.101/10.10.0.102)
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;To assign static addresses to Multipass virtual machines, we will use one of the options described in the article “&lt;a href=&quot;https://blog.andygol.co.ua/en/2025/12/26/static-ip-for-multipass-vm/#adding-static-ip-address-to-the-second-network-interface-of-the-virtual-machine&quot;&gt;Adding a static IP address to Multipass virtual machines on macOS&lt;/a&gt;”.&lt;/p&gt;

&lt;p&gt;Add the appropriate section to the cloud-init file with the static network address configuration on the second network interface (see &lt;a href=&quot;#cloud-init-config-yaml&quot;&gt;&lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;snipets/cloud-init-config.yaml&lt;/code&gt;&lt;/a&gt;).&lt;/p&gt;

&lt;h2 id=&quot;creating-an-etcd-cluster&quot;&gt;Creating an etcd cluster&lt;/h2&gt;

&lt;p&gt;Let’s start by creating an etcd cluster, where our highly available Kubernetes cluster will store the configuration and desired state of system objects.&lt;/p&gt;

&lt;h3 id=&quot;deploying-the-first-node-of-the-etcd-cluster&quot;&gt;Deploying the first node of the etcd cluster&lt;/h3&gt;

&lt;p&gt;Let’s start deploying our cluster by creating the first etcd node.&lt;/p&gt;

&lt;div class=&quot;language-bash highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;nb&quot;&gt;export &lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;VM_IP&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;10.10.0.11/24&quot;&lt;/span&gt;

multipass launch &lt;span class=&quot;nt&quot;&gt;--name&lt;/span&gt; ext-etcd-1 &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
  &lt;span class=&quot;nt&quot;&gt;--cpus&lt;/span&gt; 2 &lt;span class=&quot;nt&quot;&gt;--memory&lt;/span&gt; 2G &lt;span class=&quot;nt&quot;&gt;--disk&lt;/span&gt; 5G &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
  &lt;span class=&quot;nt&quot;&gt;--network&lt;/span&gt; &lt;span class=&quot;nv&quot;&gt;name&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;en0,mode&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;manual &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
  &lt;span class=&quot;nt&quot;&gt;--cloud-init&lt;/span&gt; &amp;lt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt; yq eval-all &lt;span class=&quot;s1&quot;&gt;&apos;
      # Merge all files into a single object
      . as $item ireduce ({}; . *+ $item) |

      # Remove kubectl from the list of packages
      del(.packages[] | select(. == &quot;kubectl&quot;)) |

      # Update the network configuration
      with(.write_files[] | select(.path == &quot;/etc/netplan/60-static-ip.yaml&quot;);
        .content |= (
          from_yaml |
          .network.ethernets.enp0s2.addresses += [strenv(VM_IP)] |
          to_yaml
        )
      ) &apos;&lt;/span&gt; &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
      snipets/cloud-init-config.yaml &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
      snipets/cloud-init-user.yaml &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
      snipets/cloud-init-base.yaml &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
      snipets/cloud-init-etcd.yaml &lt;span class=&quot;o&quot;&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;The command &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;multipass launch --name ext-etcd-1&lt;/code&gt; will start deploying a virtual machine with the name specified in the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;--name/-n&lt;/code&gt; parameter, in this case &lt;strong&gt;ext-etcd-1&lt;/strong&gt;; the parameters &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;--cpus 2 --memory 2G --disk 5G&lt;/code&gt; specify the number of processor cores, memory, and disk space, respectively; &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;--network name=en0,mode=manual&lt;/code&gt; will create another network interface for the virtual machine, whose IP address will be assigned via the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;VM_IP&lt;/code&gt; variable.&lt;/p&gt;

&lt;p&gt;Since etcd actively writes data to disk, database performance depends directly on the disk subsystem, so SSD storage is strongly recommended for production use. etcd’s default storage quota is 2 GB, so allocate at least that much disk space; to keep the database out of swap, RAM should comfortably cover this quota, with 8 GB being the recommended maximum for typical deployments. For small production clusters, etcd machines should have 2 vCPUs, 8 GB of memory, and a 50-80 GB SSD. (See &lt;a href=&quot;https://andygol-etcd.netlify.app/docs/v3.5/op-guide/hardware/#small-cluster&quot;&gt;https://andygol-etcd.netlify.app/docs/v3.5/op-guide/hardware/#small-cluster&lt;/a&gt;, &lt;a href=&quot;https://andygol-etcd.netlify.app/docs/v3.5/faq/#system-requirements&quot;&gt;https://andygol-etcd.netlify.app/docs/v3.5/faq/#system-requirements&lt;/a&gt;)&lt;/p&gt;

&lt;p&gt;We will merge cloud-init parameters on the fly by combining our template files &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;snipets/cloud-init-config.yaml&lt;/code&gt;, &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;snipets/cloud-init-user.yaml&lt;/code&gt;, &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;snipets/cloud-init-base.yaml&lt;/code&gt;, &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;snipets/cloud-init-etcd.yaml&lt;/code&gt; using &lt;strong&gt;yq&lt;/strong&gt;.&lt;/p&gt;
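The `*+` operator used in the yq expression deep-merges maps and appends arrays, which is why each snippet's `packages` and `runcmd` lists survive the merge. A hedged illustration with invented inputs:

```yaml
# a.yaml
packages: [curl, gnupg]
runcmd:
  - sysctl --system

# b.yaml
packages: [containerd]

# Result of: yq eval-all '. as $item ireduce ({}; . *+ $item)' a.yaml b.yaml
packages: [curl, gnupg, containerd]
runcmd:
  - sysctl --system
```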

&lt;p&gt;If you did not create a temporary virtual machine to initialize &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;bridge101&lt;/code&gt; and did not &lt;a href=&quot;https://blog.andygol.co.ua/en/2025/12/26/static-ip-for-multipass-vm/#multipass-bridge-for-the-second-network-interface&quot;&gt;add an alias for it&lt;/a&gt;, now that the virtual machine is deployed, it’s time to do so. Run the following on your host:&lt;/p&gt;

&lt;div class=&quot;language-bash highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;c&quot;&gt;# Define the name of the bridge. Most likely, the name will be bridge101.&lt;/span&gt;
ifconfig &lt;span class=&quot;nt&quot;&gt;-v&lt;/span&gt; | &lt;span class=&quot;nb&quot;&gt;grep&lt;/span&gt; &lt;span class=&quot;nt&quot;&gt;-B&lt;/span&gt; 20 &quot;member: vmenet&quot; | &lt;span class=&quot;nb&quot;&gt;grep&lt;/span&gt; &quot;bridge&quot; | &lt;span class=&quot;nb&quot;&gt;awk&lt;/span&gt; &lt;span class=&quot;nt&quot;&gt;-F&lt;/span&gt;: &apos;&lt;span class=&quot;o&quot;&gt;{&lt;/span&gt;print &lt;span class=&quot;nv&quot;&gt;$1&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;}&lt;/span&gt;&apos; | &lt;span class=&quot;nb&quot;&gt;tail&lt;/span&gt; &lt;span class=&quot;nt&quot;&gt;-n&lt;/span&gt; 1

&lt;span class=&quot;c&quot;&gt;# Add the address&lt;/span&gt;
&lt;span class=&quot;nb&quot;&gt;sudo &lt;/span&gt;ifconfig bridge101 10.10.0.1/24 &lt;span class=&quot;nb&quot;&gt;alias&lt;/span&gt;

&lt;span class=&quot;c&quot;&gt;# Check that it has been added&lt;/span&gt;
ifconfig bridge101 | &lt;span class=&quot;nb&quot;&gt;grep&lt;/span&gt; &quot;inet &quot;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;
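&lt;p&gt;The bridge-name pipeline above is easier to reason about (and to test) when wrapped in a function that reads &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;ifconfig -v&lt;/code&gt; output from stdin. A sketch with illustrative sample text (not real output from your machine):&lt;/p&gt;

```shell
# Extract the name of the bridge that has vmenet members from
# `ifconfig -v` output supplied on stdin.
find_vm_bridge() {
  grep -B 20 "member: vmenet" | grep "bridge" | awk -F: '{print $1}' | tail -n 1
}

# Illustrative sample of what `ifconfig -v` might print for the bridge.
SAMPLE='bridge101: flags=8863 mtu 1500
  member: vmenet0 flags=3
  member: vmenet1 flags=3'

printf '%s\n' "${SAMPLE}" | find_vm_bridge
# → bridge101
```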

&lt;p&gt;If you did this 👆 after creating a temporary virtual machine, you can now delete it:&lt;/p&gt;

&lt;div class=&quot;language-bash highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;multipass delete &amp;lt;temp-vm&amp;gt; &lt;span class=&quot;nt&quot;&gt;--purge&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;h4 id=&quot;configuring-the-first-etcd-node&quot;&gt;Configuring the first etcd node&lt;/h4&gt;

&lt;p&gt;Let’s log into our node using SSH.&lt;/p&gt;

&lt;div class=&quot;language-bash highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;ssh &lt;span class=&quot;nt&quot;&gt;-i&lt;/span&gt; ~/.ssh/k8s_cluster_key k8sadmin@10.10.0.11
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Since we don’t have any CA certificates yet, we need to generate them.&lt;/p&gt;

&lt;div class=&quot;language-bash highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;nb&quot;&gt;sudo &lt;/span&gt;kubeadm init phase certs etcd-ca
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;We will get two files, &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;ca.crt&lt;/code&gt; and &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;ca.key&lt;/code&gt;, in the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;/etc/kubernetes/pki/etcd/&lt;/code&gt; folder.&lt;/p&gt;

&lt;p&gt;&lt;a id=&quot;etcd-kubeadmcfg-yaml&quot;&gt;&lt;/a&gt;Now let’s create a configuration file for kubeadm &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;kubeadmcfg.yaml&lt;/code&gt; using the appropriate values in the variables &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;ETCD_HOST&lt;/code&gt; (IP address of the virtual machine) and &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;ETCD_NAME&lt;/code&gt; (its short name). Note the value of the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;ETCD_INITIAL_CLUSTER_STATE&lt;/code&gt; variable, which indicates that we are creating a new etcd cluster. We will add other nodes to it later.&lt;/p&gt;

&lt;div class=&quot;language-bash highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;nv&quot;&gt;ETCD_HOST&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;&lt;span class=&quot;si&quot;&gt;$(&lt;/span&gt;&lt;span class=&quot;nb&quot;&gt;hostname&lt;/span&gt; &lt;span class=&quot;nt&quot;&gt;-I&lt;/span&gt; | &lt;span class=&quot;nb&quot;&gt;awk&lt;/span&gt; &lt;span class=&quot;s1&quot;&gt;&apos;{print $2}&apos;&lt;/span&gt;&lt;span class=&quot;si&quot;&gt;)&lt;/span&gt;
&lt;span class=&quot;nv&quot;&gt;ETCD_NAME&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;&lt;span class=&quot;si&quot;&gt;$(&lt;/span&gt;&lt;span class=&quot;nb&quot;&gt;hostname&lt;/span&gt; &lt;span class=&quot;nt&quot;&gt;-s&lt;/span&gt;&lt;span class=&quot;si&quot;&gt;)&lt;/span&gt;
&lt;span class=&quot;nv&quot;&gt;ETCD_INITIAL_CLUSTER&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;&lt;span class=&quot;k&quot;&gt;${&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;ETCD_INITIAL_CLUSTER&lt;/span&gt;&lt;span class=&quot;k&quot;&gt;:-&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;&lt;/span&gt;&lt;span class=&quot;k&quot;&gt;${&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;ETCD_NAME&lt;/span&gt;&lt;span class=&quot;k&quot;&gt;}&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;=https://&lt;/span&gt;&lt;span class=&quot;k&quot;&gt;${&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;ETCD_HOST&lt;/span&gt;&lt;span class=&quot;k&quot;&gt;}&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;:2380&quot;&lt;/span&gt;&lt;span class=&quot;k&quot;&gt;}&lt;/span&gt;
&lt;span class=&quot;nv&quot;&gt;ETCD_INITIAL_CLUSTER_STATE&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;&lt;span class=&quot;k&quot;&gt;${&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;ETCD_INITIAL_CLUSTER_STATE&lt;/span&gt;&lt;span class=&quot;k&quot;&gt;:-&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;new&quot;&lt;/span&gt;&lt;span class=&quot;k&quot;&gt;}&lt;/span&gt;
&lt;span class=&quot;nb&quot;&gt;cat&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;&amp;lt;&amp;lt;&lt;/span&gt; &lt;span class=&quot;no&quot;&gt;EOF&lt;/span&gt;&lt;span class=&quot;sh&quot;&gt; &amp;gt; &lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;$HOME&lt;/span&gt;&lt;span class=&quot;sh&quot;&gt;/kubeadmcfg.yaml
---
apiVersion: &quot;kubeadm.k8s.io/v1beta4&quot;
kind: InitConfiguration
nodeRegistration:
    name: &lt;/span&gt;&lt;span class=&quot;k&quot;&gt;${&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;ETCD_NAME&lt;/span&gt;&lt;span class=&quot;k&quot;&gt;}&lt;/span&gt;&lt;span class=&quot;sh&quot;&gt;
localAPIEndpoint:
    advertiseAddress: &lt;/span&gt;&lt;span class=&quot;k&quot;&gt;${&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;ETCD_HOST&lt;/span&gt;&lt;span class=&quot;k&quot;&gt;}&lt;/span&gt;&lt;span class=&quot;sh&quot;&gt;
---
apiVersion: &quot;kubeadm.k8s.io/v1beta4&quot;
kind: ClusterConfiguration
etcd:
    local:
        serverCertSANs:
        - &quot;&lt;/span&gt;&lt;span class=&quot;k&quot;&gt;${&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;ETCD_HOST&lt;/span&gt;&lt;span class=&quot;k&quot;&gt;}&lt;/span&gt;&lt;span class=&quot;sh&quot;&gt;&quot;
        peerCertSANs:
        - &quot;&lt;/span&gt;&lt;span class=&quot;k&quot;&gt;${&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;ETCD_HOST&lt;/span&gt;&lt;span class=&quot;k&quot;&gt;}&lt;/span&gt;&lt;span class=&quot;sh&quot;&gt;&quot;
        extraArgs:
        - name: initial-cluster
          value: &lt;/span&gt;&lt;span class=&quot;k&quot;&gt;${&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;ETCD_INITIAL_CLUSTER&lt;/span&gt;&lt;span class=&quot;k&quot;&gt;}&lt;/span&gt;&lt;span class=&quot;sh&quot;&gt;
        - name: initial-cluster-state
          value: &lt;/span&gt;&lt;span class=&quot;k&quot;&gt;${&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;ETCD_INITIAL_CLUSTER_STATE&lt;/span&gt;&lt;span class=&quot;k&quot;&gt;}&lt;/span&gt;&lt;span class=&quot;sh&quot;&gt;
        - name: name
          value: &lt;/span&gt;&lt;span class=&quot;k&quot;&gt;${&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;ETCD_NAME&lt;/span&gt;&lt;span class=&quot;k&quot;&gt;}&lt;/span&gt;&lt;span class=&quot;sh&quot;&gt;
        - name: listen-peer-urls
          value: https://&lt;/span&gt;&lt;span class=&quot;k&quot;&gt;${&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;ETCD_HOST&lt;/span&gt;&lt;span class=&quot;k&quot;&gt;}&lt;/span&gt;&lt;span class=&quot;sh&quot;&gt;:2380
        - name: listen-client-urls
          value: https://&lt;/span&gt;&lt;span class=&quot;k&quot;&gt;${&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;ETCD_HOST&lt;/span&gt;&lt;span class=&quot;k&quot;&gt;}&lt;/span&gt;&lt;span class=&quot;sh&quot;&gt;:2379,https://127.0.0.1:2379
        - name: advertise-client-urls
          value: https://&lt;/span&gt;&lt;span class=&quot;k&quot;&gt;${&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;ETCD_HOST&lt;/span&gt;&lt;span class=&quot;k&quot;&gt;}&lt;/span&gt;&lt;span class=&quot;sh&quot;&gt;:2379
        - name: initial-advertise-peer-urls
          value: https://&lt;/span&gt;&lt;span class=&quot;k&quot;&gt;${&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;ETCD_HOST&lt;/span&gt;&lt;span class=&quot;k&quot;&gt;}&lt;/span&gt;&lt;span class=&quot;sh&quot;&gt;:2380
&lt;/span&gt;&lt;span class=&quot;no&quot;&gt;EOF
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;
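&lt;p&gt;To make the shape of &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;ETCD_INITIAL_CLUSTER&lt;/code&gt; explicit, here is a small illustrative helper (not part of kubeadm) that builds the comma-separated peer list from &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;name ip&lt;/code&gt; pairs; the names and addresses are the ones used in this guide:&lt;/p&gt;

```shell
# Build the comma-separated etcd initial-cluster string
# ("name=https://ip:2380,...") from "name ip" pairs on stdin.
build_initial_cluster() {
  local list="" name ip
  while read -r name ip; do
    if [ -n "${name}" ]; then
      # Append with a comma separator once the list is non-empty.
      list="${list:+${list},}${name}=https://${ip}:2380"
    fi
  done
  printf '%s\n' "${list}"
}

# The two nodes used in this guide:
printf 'ext-etcd-1 10.10.0.11\next-etcd-2 10.10.0.12\n' | build_initial_cluster
# → ext-etcd-1=https://10.10.0.11:2380,ext-etcd-2=https://10.10.0.12:2380
```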

&lt;p&gt;Using the created &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;kubeadmcfg.yaml&lt;/code&gt; file, which we placed in the home directory of the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;k8sadmin&lt;/code&gt; user, we will generate etcd certificates and create a static pod manifest for the etcd cluster node.&lt;/p&gt;

&lt;div class=&quot;language-bash highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;c&quot;&gt;# 1. Certificate issuance&lt;/span&gt;
&lt;span class=&quot;nb&quot;&gt;sudo &lt;/span&gt;kubeadm init phase certs etcd-server &lt;span class=&quot;nt&quot;&gt;--config&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;$HOME&lt;/span&gt;/kubeadmcfg.yaml
&lt;span class=&quot;nb&quot;&gt;sudo &lt;/span&gt;kubeadm init phase certs etcd-peer &lt;span class=&quot;nt&quot;&gt;--config&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;$HOME&lt;/span&gt;/kubeadmcfg.yaml
&lt;span class=&quot;nb&quot;&gt;sudo &lt;/span&gt;kubeadm init phase certs etcd-healthcheck-client &lt;span class=&quot;nt&quot;&gt;--config&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;$HOME&lt;/span&gt;/kubeadmcfg.yaml
&lt;span class=&quot;nb&quot;&gt;sudo &lt;/span&gt;kubeadm init phase certs apiserver-etcd-client &lt;span class=&quot;nt&quot;&gt;--config&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;$HOME&lt;/span&gt;/kubeadmcfg.yaml
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Now we should have the following keys and certificates available:&lt;/p&gt;

&lt;pre&gt;&lt;code class=&quot;language-none&quot;&gt;/home/k8sadmin
└── kubeadmcfg.yaml
---
/etc/kubernetes/pki
├── apiserver-etcd-client.crt
├── apiserver-etcd-client.key
└── etcd
    ├── ca.crt
    ├── ca.key
    ├── healthcheck-client.crt
    ├── healthcheck-client.key
    ├── peer.crt
    ├── peer.key
    ├── server.crt
    └── server.key
&lt;/code&gt;&lt;/pre&gt;
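&lt;p&gt;A quick way to confirm nothing is missing is to loop over the expected file list. A sketch (the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;check_etcd_pki&lt;/code&gt; helper is hypothetical and simply mirrors the tree above; pass a different base directory to try it elsewhere):&lt;/p&gt;

```shell
# Report any missing etcd PKI files under the given base directory
# (defaults to /etc/kubernetes/pki). Prints one MISSING line per
# absent file, or OK when everything is in place.
check_etcd_pki() {
  local base="${1:-/etc/kubernetes/pki}"
  local missing=0 f
  for f in apiserver-etcd-client.crt apiserver-etcd-client.key \
           etcd/ca.crt etcd/ca.key \
           etcd/healthcheck-client.crt etcd/healthcheck-client.key \
           etcd/peer.crt etcd/peer.key \
           etcd/server.crt etcd/server.key; do
    if [ ! -f "${base}/${f}" ]; then
      echo "MISSING: ${base}/${f}"
      missing=1
    fi
  done
  if [ "${missing}" -eq 0 ]; then
    echo "OK"
  fi
}
```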

&lt;p&gt;After creating the appropriate certificates, it’s time to create a static pod manifest. As a result, we should have a file &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;/etc/kubernetes/manifests/etcd.yaml&lt;/code&gt;.&lt;/p&gt;

&lt;div class=&quot;language-bash highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;c&quot;&gt;# 2. Creating a static pod manifest&lt;/span&gt;
&lt;span class=&quot;nb&quot;&gt;sudo &lt;/span&gt;kubeadm init phase etcd &lt;span class=&quot;nb&quot;&gt;local&lt;/span&gt; &lt;span class=&quot;nt&quot;&gt;--config&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;$HOME&lt;/span&gt;/kubeadmcfg.yaml
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Once we have the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;/etc/kubernetes/manifests/etcd.yaml&lt;/code&gt; manifest, the node’s &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;kubelet&lt;/code&gt; should pick it up, download the container image, and start the pod with &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;etcd&lt;/code&gt;, after which the first node of our cluster should respond to health checks.&lt;/p&gt;

&lt;div class=&quot;language-bash highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;crictl &lt;span class=&quot;nb&quot;&gt;exec&lt;/span&gt; &lt;span class=&quot;si&quot;&gt;$(&lt;/span&gt;crictl ps &lt;span class=&quot;nt&quot;&gt;--label&lt;/span&gt; io.kubernetes.container.name&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;etcd &lt;span class=&quot;nt&quot;&gt;--quiet&lt;/span&gt;&lt;span class=&quot;si&quot;&gt;)&lt;/span&gt; etcdctl &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
   &lt;span class=&quot;nt&quot;&gt;--cert&lt;/span&gt; /etc/kubernetes/pki/etcd/peer.crt &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
   &lt;span class=&quot;nt&quot;&gt;--key&lt;/span&gt; /etc/kubernetes/pki/etcd/peer.key &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
   &lt;span class=&quot;nt&quot;&gt;--cacert&lt;/span&gt; /etc/kubernetes/pki/etcd/ca.crt &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
   &lt;span class=&quot;nt&quot;&gt;--endpoints&lt;/span&gt; https://10.10.0.11:2379 &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
   endpoint health &lt;span class=&quot;nt&quot;&gt;-w&lt;/span&gt; table
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;div class=&quot;language-console highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;go&quot;&gt;+-------------------------+--------+------------+-------+
|        ENDPOINT         | HEALTH |    TOOK    | ERROR |
+-------------------------+--------+------------+-------+
| https://10.10.0.11:2379 |   true | 7.777048ms |       |
+-------------------------+--------+------------+-------+
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;h3 id=&quot;deploying-subsequent-etcd-cluster-nodes&quot;&gt;Deploying subsequent etcd cluster nodes&lt;/h3&gt;

&lt;p&gt;To deploy the next etcd cluster nodes: &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;ext-etcd-2&lt;/code&gt;, &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;ext-etcd-3&lt;/code&gt;, and so on (as needed), change the value of &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;VM_IP&lt;/code&gt; to &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;&quot;10.10.0.12/24&quot;&lt;/code&gt; and the virtual machine name in the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;--name&lt;/code&gt; parameter to &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;ext-etcd-2&lt;/code&gt;, respectively.&lt;/p&gt;

&lt;div class=&quot;language-bash highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;nb&quot;&gt;export &lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;VM_IP&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;10.10.0.12/24&quot;&lt;/span&gt;

multipass launch &lt;span class=&quot;nt&quot;&gt;--name&lt;/span&gt; ext-etcd-2 &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
  &lt;span class=&quot;nt&quot;&gt;--cpus&lt;/span&gt; 2 &lt;span class=&quot;nt&quot;&gt;--memory&lt;/span&gt; 2G &lt;span class=&quot;nt&quot;&gt;--disk&lt;/span&gt; 5G &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
  &lt;span class=&quot;nt&quot;&gt;--network&lt;/span&gt; &lt;span class=&quot;nv&quot;&gt;name&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;en0,mode&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;manual &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
  &lt;span class=&quot;nt&quot;&gt;--cloud-init&lt;/span&gt; &amp;lt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt; yq eval-all &lt;span class=&quot;s1&quot;&gt;&apos;
      # Merge all files into a single object
      . as $item ireduce ({}; . *+ $item) |

      # Remove kubectl from the package list
      del(.packages[] | select(. == &quot;kubectl&quot;)) |

      # Update network configuration
      with(.write_files[] | select(.path == &quot;/etc/netplan/60-static-ip.yaml&quot;);
        .content |= (
          from_yaml |
          .network.ethernets.enp0s2.addresses += [strenv(VM_IP)] |
          to_yaml
        )
      ) &apos;&lt;/span&gt; &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
      snipets/cloud-init-config.yaml &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
      snipets/cloud-init-user.yaml &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
      snipets/cloud-init-base.yaml &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
      snipets/cloud-init-etcd.yaml &lt;span class=&quot;o&quot;&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;After completing the deployment of the node, let’s check if we have SSH access to it:&lt;/p&gt;

&lt;div class=&quot;language-bash highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;ssh &lt;span class=&quot;nt&quot;&gt;-i&lt;/span&gt; ~/.ssh/k8s_cluster_key k8sadmin@10.10.0.12 &lt;span class=&quot;nt&quot;&gt;--&lt;/span&gt; &lt;span class=&quot;nb&quot;&gt;ls&lt;/span&gt; &lt;span class=&quot;nt&quot;&gt;-la&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;h4 id=&quot;configuring-etcd-nodes-and-joining-them-to-the-cluster&quot;&gt;Configuring etcd nodes and joining them to the cluster&lt;/h4&gt;

&lt;p&gt;On our etcd node &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;ext-etcd-1&lt;/code&gt;, which is already running, we will execute the following command to get instructions for joining the node &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;ext-etcd-2&lt;/code&gt; to the etcd cluster. After the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;member add&lt;/code&gt; command, we specify the name of the node being joined and its peer URL in the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;--peer-urls&lt;/code&gt; parameter.&lt;/p&gt;

&lt;div class=&quot;language-bash highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;crictl &lt;span class=&quot;nb&quot;&gt;exec&lt;/span&gt; &lt;span class=&quot;si&quot;&gt;$(&lt;/span&gt;crictl ps &lt;span class=&quot;nt&quot;&gt;--label&lt;/span&gt; io.kubernetes.container.name&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;etcd &lt;span class=&quot;nt&quot;&gt;--quiet&lt;/span&gt;&lt;span class=&quot;si&quot;&gt;)&lt;/span&gt; etcdctl &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
   &lt;span class=&quot;nt&quot;&gt;--cert&lt;/span&gt; /etc/kubernetes/pki/etcd/peer.crt &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
   &lt;span class=&quot;nt&quot;&gt;--key&lt;/span&gt; /etc/kubernetes/pki/etcd/peer.key &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
   &lt;span class=&quot;nt&quot;&gt;--cacert&lt;/span&gt; /etc/kubernetes/pki/etcd/ca.crt &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
   &lt;span class=&quot;nt&quot;&gt;--endpoints&lt;/span&gt; https://10.10.0.11:2379 &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
   member add ext-etcd-2 &lt;span class=&quot;nt&quot;&gt;--peer-urls&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;https://10.10.0.12:2380
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;&lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;etcdctl&lt;/code&gt; will register a new member of the etcd cluster, and in response we will receive its ID and the string &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;&quot;ext-etcd-1=https://10.10.0.11:2380,ext-etcd-2=https://10.10.0.12:2380&quot;&lt;/code&gt; with a complete list of cluster nodes (including the new member).&lt;/p&gt;

&lt;div class=&quot;language-console highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;go&quot;&gt;Member e3e9330902f761c3 added to cluster 3f0c3972eda275cb

ETCD_NAME=&quot;ext-etcd-2&quot;
ETCD_INITIAL_CLUSTER=&quot;ext-etcd-1=https://10.10.0.11:2380,ext-etcd-2=https://10.10.0.12:2380&quot;
ETCD_INITIAL_ADVERTISE_PEER_URLS=&quot;https://10.10.0.12:2380&quot;
ETCD_INITIAL_CLUSTER_STATE=&quot;existing&quot;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;
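&lt;p&gt;The variables printed by &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;member add&lt;/code&gt; map directly onto the values needed in &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;kubeadmcfg.yaml&lt;/code&gt; on the new node. If you capture the output, the assignments can be reused as-is; a sketch using the sample response above:&lt;/p&gt;

```shell
# Pull the ETCD_* assignments out of `etcdctl member add` output
# so they can be reused when generating kubeadmcfg.yaml.
MEMBER_ADD_OUTPUT='Member e3e9330902f761c3 added to cluster 3f0c3972eda275cb

ETCD_NAME="ext-etcd-2"
ETCD_INITIAL_CLUSTER="ext-etcd-1=https://10.10.0.11:2380,ext-etcd-2=https://10.10.0.12:2380"
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://10.10.0.12:2380"
ETCD_INITIAL_CLUSTER_STATE="existing"'

# Keep only the variable assignments and evaluate them in this shell.
eval "$(printf '%s\n' "${MEMBER_ADD_OUTPUT}" | grep '^ETCD_')"

echo "${ETCD_NAME}"                  # → ext-etcd-2
echo "${ETCD_INITIAL_CLUSTER_STATE}" # → existing
```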

&lt;h4 id=&quot;creating-kubeadmcfgyaml&quot;&gt;Creating kubeadmcfg.yaml&lt;/h4&gt;

&lt;p&gt;&lt;a href=&quot;#etcd-kubeadmcfg-yaml&quot;&gt;Create the configuration file &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;~/kubeadmcfg.yaml&lt;/code&gt;&lt;/a&gt; on the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;ext-etcd-2&lt;/code&gt; node, replacing the variable values with those you just obtained after executing the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;… member add …&lt;/code&gt; command.&lt;/p&gt;

&lt;p&gt;The next mandatory step is to copy the CA files from &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;ext-etcd-1&lt;/code&gt; to &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;ext-etcd-2&lt;/code&gt;. For convenience, the steps for transferring CA files have been combined into a script that you need to create on your host machine.&lt;/p&gt;

&lt;div class=&quot;language-bash highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;nb&quot;&gt;cat&lt;/span&gt; &amp;lt;&amp;lt; &apos;EOF&apos; &lt;span class=&quot;o&quot;&gt;&amp;gt;&lt;/span&gt; copy-etcd-ca.sh
&lt;span class=&quot;c&quot;&gt;#!/bin/bash&lt;/span&gt;

&lt;span class=&quot;c&quot;&gt;# --- Default settings (change as needed) ---&lt;/span&gt;
&lt;span class=&quot;nv&quot;&gt;DEFAULT_KEY&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;~/.ssh/k8s_cluster_key&quot;&lt;/span&gt;
&lt;span class=&quot;nv&quot;&gt;DEFAULT_SRC&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;10.10.0.11&quot;&lt;/span&gt;
&lt;span class=&quot;nv&quot;&gt;DEFAULT_DEST&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;10.10.0.12&quot;&lt;/span&gt;
&lt;span class=&quot;nv&quot;&gt;USER&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;k8sadmin&quot;&lt;/span&gt;
&lt;span class=&quot;nv&quot;&gt;CERT_PATH&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;/etc/kubernetes/pki/etcd&quot;&lt;/span&gt;

&lt;span class=&quot;c&quot;&gt;# --- Assigning arguments ---&lt;/span&gt;
&lt;span class=&quot;c&quot;&gt;# $1 - first argument (source host), $2 - second (destination host), $3 - third (path to key)&lt;/span&gt;
&lt;span class=&quot;nv&quot;&gt;SRC_HOST&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;&lt;span class=&quot;k&quot;&gt;${&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;1&lt;/span&gt;&lt;span class=&quot;k&quot;&gt;:-&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;$DEFAULT_SRC&lt;/span&gt;&lt;span class=&quot;k&quot;&gt;}&lt;/span&gt;
&lt;span class=&quot;nv&quot;&gt;DEST_HOST&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;&lt;span class=&quot;k&quot;&gt;${&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;2&lt;/span&gt;&lt;span class=&quot;k&quot;&gt;:-&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;$DEFAULT_DEST&lt;/span&gt;&lt;span class=&quot;k&quot;&gt;}&lt;/span&gt;
&lt;span class=&quot;nv&quot;&gt;KEY&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;&lt;span class=&quot;k&quot;&gt;${&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;3&lt;/span&gt;&lt;span class=&quot;k&quot;&gt;:-&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;$DEFAULT_KEY&lt;/span&gt;&lt;span class=&quot;k&quot;&gt;}&lt;/span&gt;

&lt;span class=&quot;nb&quot;&gt;echo&lt;/span&gt; &quot;Parameters used:&quot;
&lt;span class=&quot;nb&quot;&gt;echo&lt;/span&gt; &quot;  Source:      &lt;span class=&quot;k&quot;&gt;${&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;SRC_HOST&lt;/span&gt;&lt;span class=&quot;k&quot;&gt;}&lt;/span&gt;&quot;
&lt;span class=&quot;nb&quot;&gt;echo&lt;/span&gt; &quot;  Destination: &lt;span class=&quot;k&quot;&gt;${&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;DEST_HOST&lt;/span&gt;&lt;span class=&quot;k&quot;&gt;}&lt;/span&gt;&quot;
&lt;span class=&quot;nb&quot;&gt;echo&lt;/span&gt; &quot;  SSH Key:     &lt;span class=&quot;k&quot;&gt;${&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;KEY&lt;/span&gt;&lt;span class=&quot;k&quot;&gt;}&lt;/span&gt;&quot;
&lt;span class=&quot;nb&quot;&gt;echo&lt;/span&gt; &quot;---------------------------------------&quot;

&lt;span class=&quot;c&quot;&gt;# 1. Preparing files on the source&lt;/span&gt;
&lt;span class=&quot;nb&quot;&gt;echo&lt;/span&gt; &quot;[1/4] Preparing files on &lt;span class=&quot;k&quot;&gt;${&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;SRC_HOST&lt;/span&gt;&lt;span class=&quot;k&quot;&gt;}&lt;/span&gt;...&quot;
ssh &lt;span class=&quot;nt&quot;&gt;-i&lt;/span&gt; &quot;&lt;span class=&quot;k&quot;&gt;${&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;KEY&lt;/span&gt;&lt;span class=&quot;k&quot;&gt;}&lt;/span&gt;&quot; &quot;&lt;span class=&quot;k&quot;&gt;${&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;USER&lt;/span&gt;&lt;span class=&quot;k&quot;&gt;}&lt;/span&gt;@&lt;span class=&quot;k&quot;&gt;${&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;SRC_HOST&lt;/span&gt;&lt;span class=&quot;k&quot;&gt;}&lt;/span&gt;&quot; &quot;sudo &lt;span class=&quot;nb&quot;&gt;cp&lt;/span&gt; &lt;span class=&quot;nv&quot;&gt;$CERT_PATH&lt;/span&gt;/ca.&lt;span class=&quot;k&quot;&gt;*&lt;/span&gt; /tmp/ &lt;span class=&quot;o&quot;&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class=&quot;nb&quot;&gt;sudo chown&lt;/span&gt; &lt;span class=&quot;k&quot;&gt;${&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;USER&lt;/span&gt;&lt;span class=&quot;k&quot;&gt;}&lt;/span&gt;:&lt;span class=&quot;k&quot;&gt;${&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;USER&lt;/span&gt;&lt;span class=&quot;k&quot;&gt;}&lt;/span&gt; /tmp/ca.&lt;span class=&quot;k&quot;&gt;*&lt;/span&gt;&quot; &lt;span class=&quot;o&quot;&gt;||&lt;/span&gt; &lt;span class=&quot;nb&quot;&gt;exit &lt;/span&gt;1

&lt;span class=&quot;c&quot;&gt;# 2. Transferring files between hosts&lt;/span&gt;
&lt;span class=&quot;nb&quot;&gt;echo&lt;/span&gt; &quot;[2/4] Copying from &lt;span class=&quot;k&quot;&gt;${&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;SRC_HOST&lt;/span&gt;&lt;span class=&quot;k&quot;&gt;}&lt;/span&gt; to &lt;span class=&quot;nv&quot;&gt;$DEST_HOST&lt;/span&gt;...&quot;
scp &lt;span class=&quot;nt&quot;&gt;-i&lt;/span&gt; &quot;&lt;span class=&quot;k&quot;&gt;${&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;KEY&lt;/span&gt;&lt;span class=&quot;k&quot;&gt;}&lt;/span&gt;&quot; &quot;&lt;span class=&quot;k&quot;&gt;${&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;USER&lt;/span&gt;&lt;span class=&quot;k&quot;&gt;}&lt;/span&gt;@&lt;span class=&quot;k&quot;&gt;${&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;SRC_HOST&lt;/span&gt;&lt;span class=&quot;k&quot;&gt;}&lt;/span&gt;:/tmp/ca.&lt;span class=&quot;k&quot;&gt;*&lt;/span&gt;&quot; &quot;&lt;span class=&quot;k&quot;&gt;${&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;USER&lt;/span&gt;&lt;span class=&quot;k&quot;&gt;}&lt;/span&gt;@&lt;span class=&quot;nv&quot;&gt;$DEST_HOST&lt;/span&gt;:/tmp/&quot; &lt;span class=&quot;o&quot;&gt;||&lt;/span&gt; &lt;span class=&quot;nb&quot;&gt;exit &lt;/span&gt;1

&lt;span class=&quot;c&quot;&gt;# 3. Placing files on the target host&lt;/span&gt;
&lt;span class=&quot;nb&quot;&gt;echo&lt;/span&gt; &quot;[3/4] Placing files on &lt;span class=&quot;nv&quot;&gt;$DEST_HOST&lt;/span&gt;...&quot;
ssh &lt;span class=&quot;nt&quot;&gt;-i&lt;/span&gt; “&lt;span class=&quot;k&quot;&gt;${&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;KEY&lt;/span&gt;&lt;span class=&quot;k&quot;&gt;}&lt;/span&gt;” “&lt;span class=&quot;k&quot;&gt;${&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;USER&lt;/span&gt;&lt;span class=&quot;k&quot;&gt;}&lt;/span&gt;@&lt;span class=&quot;nv&quot;&gt;$DEST_HOST&lt;/span&gt;” &lt;span class=&quot;s2&quot;&gt;&quot;sudo mkdir -p &lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;$CERT_PATH&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt; &amp;amp;&amp;amp; sudo mv /tmp/ca.* &lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;$CERT_PATH&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;/ &amp;amp;&amp;amp; sudo chown root:root &lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;$CERT_PATH&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;/ca.* &amp;amp;&amp;amp; sudo chmod 600 &lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;$CERT_PATH&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;/ca.key&quot;&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;||&lt;/span&gt; &lt;span class=&quot;nb&quot;&gt;exit &lt;/span&gt;1

&lt;span class=&quot;c&quot;&gt;# 4. Cleanup&lt;/span&gt;
&lt;span class=&quot;nb&quot;&gt;echo&lt;/span&gt; &quot;[4/4] Deleting temporary files...&quot;
ssh &lt;span class=&quot;nt&quot;&gt;-i&lt;/span&gt; &quot;&lt;span class=&quot;k&quot;&gt;${&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;KEY&lt;/span&gt;&lt;span class=&quot;k&quot;&gt;}&lt;/span&gt;&quot; &quot;&lt;span class=&quot;k&quot;&gt;${&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;USER&lt;/span&gt;&lt;span class=&quot;k&quot;&gt;}&lt;/span&gt;@&lt;span class=&quot;k&quot;&gt;${&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;SRC_HOST&lt;/span&gt;&lt;span class=&quot;k&quot;&gt;}&lt;/span&gt;&quot; &quot;rm /tmp/ca.&lt;span class=&quot;k&quot;&gt;*&lt;/span&gt;&quot;

&lt;span class=&quot;nb&quot;&gt;echo&lt;/span&gt; &quot;ca.crt and ca.key files successfully transferred from &lt;span class=&quot;k&quot;&gt;${&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;SRC_HOST&lt;/span&gt;&lt;span class=&quot;k&quot;&gt;}&lt;/span&gt; to &lt;span class=&quot;nv&quot;&gt;$DEST_HOST&lt;/span&gt;&quot;
EOF

&lt;span class=&quot;nb&quot;&gt;chmod&lt;/span&gt; +x copy-etcd-ca.sh
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Transfer the CA files from host &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;10.10.0.11&lt;/code&gt; to &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;10.10.0.12&lt;/code&gt; using the command (specify your parameters if necessary):&lt;/p&gt;

&lt;div class=&quot;language-bash highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;./copy-etcd-ca.sh 10.10.0.11 10.10.0.12
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;
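&lt;p&gt;To double-check that the transfer did not corrupt anything, you can compare checksums of &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;ca.crt&lt;/code&gt; on both nodes. A local sketch of the comparison (the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;same_cert&lt;/code&gt; helper is hypothetical; run the hashing over ssh the same way the script above does, and on macOS substitute &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;shasum -a 256&lt;/code&gt; for &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;sha256sum&lt;/code&gt;):&lt;/p&gt;

```shell
# Compare two certificate files by SHA-256 digest.
# Prints MATCH when identical, DIFFER otherwise.
same_cert() {
  local a b
  a=$(sha256sum "$1" | awk '{print $1}')
  b=$(sha256sum "$2" | awk '{print $1}')
  if [ "${a}" = "${b}" ]; then
    echo "MATCH"
  else
    echo "DIFFER"
  fi
}
```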

&lt;p&gt;Now, let’s generate key and certificate files using the created &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;kubeadmcfg.yaml&lt;/code&gt; file on the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;ext-etcd-2&lt;/code&gt; node.&lt;/p&gt;

&lt;div class=&quot;language-bash highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;c&quot;&gt;# 1. Certificate issuance&lt;/span&gt;
&lt;span class=&quot;nb&quot;&gt;sudo &lt;/span&gt;kubeadm init phase certs etcd-server &lt;span class=&quot;nt&quot;&gt;--config&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;$HOME&lt;/span&gt;/kubeadmcfg.yaml
&lt;span class=&quot;nb&quot;&gt;sudo &lt;/span&gt;kubeadm init phase certs etcd-peer &lt;span class=&quot;nt&quot;&gt;--config&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;$HOME&lt;/span&gt;/kubeadmcfg.yaml
&lt;span class=&quot;nb&quot;&gt;sudo &lt;/span&gt;kubeadm init phase certs etcd-healthcheck-client &lt;span class=&quot;nt&quot;&gt;--config&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;$HOME&lt;/span&gt;/kubeadmcfg.yaml
&lt;span class=&quot;nb&quot;&gt;sudo &lt;/span&gt;kubeadm init phase certs apiserver-etcd-client &lt;span class=&quot;nt&quot;&gt;--config&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;$HOME&lt;/span&gt;/kubeadmcfg.yaml
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;After creating the necessary key files and certificates for the second etcd node, delete the file with the CA private key &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;/etc/kubernetes/pki/etcd/ca.key&lt;/code&gt;, as it is no longer needed here.&lt;/p&gt;

&lt;div class=&quot;language-bash highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;c&quot;&gt;# 2. Delete /etc/kubernetes/pki/etcd/ca.key&lt;/span&gt;
&lt;span class=&quot;nb&quot;&gt;sudo rm&lt;/span&gt; &lt;span class=&quot;nt&quot;&gt;-f&lt;/span&gt; /etc/kubernetes/pki/etcd/ca.key
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Now that we have the necessary certificates in place, let’s create a manifest for deploying a static pod.&lt;/p&gt;

&lt;div class=&quot;language-bash highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;nb&quot;&gt;sudo &lt;/span&gt;kubeadm init phase etcd &lt;span class=&quot;nb&quot;&gt;local&lt;/span&gt; &lt;span class=&quot;nt&quot;&gt;--config&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;$HOME&lt;/span&gt;/kubeadmcfg.yaml
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;In a minute or two, once the pod is up, let’s review the list of members in our etcd cluster.&lt;/p&gt;

&lt;div class=&quot;language-bash highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;crictl &lt;span class=&quot;nb&quot;&gt;exec&lt;/span&gt; &lt;span class=&quot;si&quot;&gt;$(&lt;/span&gt;crictl ps &lt;span class=&quot;nt&quot;&gt;--label&lt;/span&gt; io.kubernetes.container.name&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;etcd &lt;span class=&quot;nt&quot;&gt;--quiet&lt;/span&gt;&lt;span class=&quot;si&quot;&gt;)&lt;/span&gt; etcdctl &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
  &lt;span class=&quot;nt&quot;&gt;--cert&lt;/span&gt; /etc/kubernetes/pki/etcd/peer.crt &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
  &lt;span class=&quot;nt&quot;&gt;--key&lt;/span&gt; /etc/kubernetes/pki/etcd/peer.key &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
  &lt;span class=&quot;nt&quot;&gt;--cacert&lt;/span&gt; /etc/kubernetes/pki/etcd/ca.crt &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
  &lt;span class=&quot;nt&quot;&gt;--endpoints&lt;/span&gt; https://10.10.0.11:2379  member list &lt;span class=&quot;nt&quot;&gt;-w&lt;/span&gt; table
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;div class=&quot;language-console highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;go&quot;&gt;+------------------+---------+------------+-------------------------+-------------------------+------------+
|        ID        | STATUS  |    NAME    |       PEER ADDRS        |      CLIENT ADDRS       | IS LEARNER |
+------------------+---------+------------+-------------------------+-------------------------+------------+
| 86041dd24c0806ff | started | ext-etcd-1 | https://10.10.0.11:2380 | https://10.10.0.11:2379 |      false |
| e3e9330902f761c3 | started | ext-etcd-2 | https://10.10.0.12:2380 | https://10.10.0.12:2379 |      false |
+------------------+---------+------------+-------------------------+-------------------------+------------+
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;To add the next cluster node, repeat the same steps as for &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;ext-etcd-2&lt;/code&gt;. Note that the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;ETCD_INITIAL_CLUSTER_STATE&lt;/code&gt; variable must be set to &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;&quot;existing&quot;&lt;/code&gt;, and the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;ETCD_INITIAL_CLUSTER&lt;/code&gt; variable used to create &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;~/kubeadmcfg.yaml&lt;/code&gt; on the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;ext-etcd-3&lt;/code&gt; node must list all the nodes that are to be members of the cluster. For the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;ext-etcd-3&lt;/code&gt; node with IP &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;10.10.0.13&lt;/code&gt;, these variables will look like this:&lt;/p&gt;

&lt;div class=&quot;language-bash highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;nv&quot;&gt;ETCD_INITIAL_CLUSTER&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;ext-etcd-1=https://10.10.0.11:2380,ext-etcd-2=https://10.10.0.12:2380,ext-etcd-3=https://10.10.0.13:2380&quot;&lt;/span&gt;
&lt;span class=&quot;nv&quot;&gt;ETCD_INITIAL_CLUSTER_STATE&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;existing&quot;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;
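&lt;p&gt;For orientation, the etcd section of the resulting configuration for &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;ext-etcd-3&lt;/code&gt; would look roughly like this. This is a sketch based on the kubeadm &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;v1beta3&lt;/code&gt; ClusterConfiguration API, not the exact template from this guide; the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;~/kubeadmcfg.yaml&lt;/code&gt; generated from your template remains authoritative.&lt;/p&gt;

&lt;div class=&quot;language-yaml highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
etcd:
  local:
    serverCertSANs:
      - &quot;10.10.0.13&quot;
    peerCertSANs:
      - &quot;10.10.0.13&quot;
    extraArgs:
      name: ext-etcd-3
      initial-cluster: ext-etcd-1=https://10.10.0.11:2380,ext-etcd-2=https://10.10.0.12:2380,ext-etcd-3=https://10.10.0.13:2380
      initial-cluster-state: existing
      listen-peer-urls: https://10.10.0.13:2380
      listen-client-urls: https://10.10.0.13:2379
      advertise-client-urls: https://10.10.0.13:2379
      initial-advertise-peer-urls: https://10.10.0.13:2380
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;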

&lt;h4 id=&quot;canceling-node-joining-to-the-cluster&quot;&gt;Canceling a node join to the cluster&lt;/h4&gt;

&lt;p&gt;If for any reason you do not want to join a node to the etcd cluster, you need to cancel the previous join command. To do this, you need to remove this node from the list of cluster members by its &lt;strong&gt;ID&lt;/strong&gt;. Even if the node is not yet running, it is already registered in the cluster in the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;unstarted&lt;/code&gt; state.&lt;/p&gt;

&lt;p&gt;Find the ID of the desired node. It can be found in the first line of the join command output, or by obtaining a list of all cluster members:&lt;/p&gt;

&lt;div class=&quot;language-bash highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;crictl &lt;span class=&quot;nb&quot;&gt;exec&lt;/span&gt; &lt;span class=&quot;si&quot;&gt;$(&lt;/span&gt;crictl ps &lt;span class=&quot;nt&quot;&gt;--label&lt;/span&gt; io.kubernetes.container.name&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;etcd &lt;span class=&quot;nt&quot;&gt;--quiet&lt;/span&gt;&lt;span class=&quot;si&quot;&gt;)&lt;/span&gt; etcdctl &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
   &lt;span class=&quot;nt&quot;&gt;--cert&lt;/span&gt; /etc/kubernetes/pki/etcd/peer.crt &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
   &lt;span class=&quot;nt&quot;&gt;--key&lt;/span&gt; /etc/kubernetes/pki/etcd/peer.key &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
   &lt;span class=&quot;nt&quot;&gt;--cacert&lt;/span&gt; /etc/kubernetes/pki/etcd/ca.crt &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
   &lt;span class=&quot;nt&quot;&gt;--endpoints&lt;/span&gt; https://10.10.0.11:2379 &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
   member list
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;In the output, you will see a line that looks something like this: &lt;br /&gt;
&lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;62f5145363dbf1b5, unstarted, , https://10.10.0.14:2380, ...&lt;/code&gt; (an unstarted member has no name yet)&lt;/p&gt;

&lt;p&gt;or in tabular form&lt;/p&gt;

&lt;div class=&quot;language-console highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;go&quot;&gt;+------------------+-----------+------------+-------------------------+-------------------------+------------+
|        ID        |  STATUS   |    NAME    |       PEER ADDRS        |      CLIENT ADDRS       | IS LEARNER |
+------------------+-----------+------------+-------------------------+-------------------------+------------+
| 167ef81a292916d4 |   started | ext-etcd-2 | https://10.10.0.12:2380 | https://10.10.0.12:2379 |      false |
| 62f5145363dbf1b5 | unstarted |            | https://10.10.0.14:2380 |                         |      false |
| 86041dd24c0806ff |   started | ext-etcd-1 | https://10.10.0.11:2380 | https://10.10.0.11:2379 |      false |
| ba9a6c0afb514fec |   started | ext-etcd-3 | https://10.10.0.13:2380 | https://10.10.0.13:2379 |      false |
+------------------+-----------+------------+-------------------------+-------------------------+------------+
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Copy the ID (here, &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;62f5145363dbf1b5&lt;/code&gt;) and execute this command to delete it:&lt;/p&gt;

&lt;div class=&quot;language-bash highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;etcdctl &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
   &lt;span class=&quot;nt&quot;&gt;--cert&lt;/span&gt; /etc/kubernetes/pki/etcd/peer.crt &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
   &lt;span class=&quot;nt&quot;&gt;--key&lt;/span&gt; /etc/kubernetes/pki/etcd/peer.key &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
   &lt;span class=&quot;nt&quot;&gt;--cacert&lt;/span&gt; /etc/kubernetes/pki/etcd/ca.crt &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
   &lt;span class=&quot;nt&quot;&gt;--endpoints&lt;/span&gt; https://10.10.0.11:2379 &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
   member remove &amp;lt;NODE_ID&amp;gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;💡 The same applies to removing any node from the list of cluster members. If you want to replace one cluster node with another, first remove the “old” node, then add the new node.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Why is this important?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;If you simply leave the node in the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;unstarted&lt;/code&gt; state, etcd will constantly try to contact it, which can lead to increased latency or problems with reaching a quorum in the future.&lt;/p&gt;
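&lt;p&gt;The quorum arithmetic behind this is simple (an illustrative sketch, not a command from this guide): etcd needs floor(N/2)+1 votes, and an unstarted member still counts toward N.&lt;/p&gt;

&lt;div class=&quot;language-bash highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;# Illustrative: votes needed for an N-member etcd cluster
quorum() { echo $(( $1 / 2 + 1 )); }
quorum 3   # prints 2
quorum 4   # prints 3: a lingering unstarted 4th member raises quorum from 2 to 3
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;So a stale &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;unstarted&lt;/code&gt; entry makes the cluster harder to keep quorate, which is why it should be removed promptly.&lt;/p&gt;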

&lt;p&gt;&lt;strong&gt;Tip&lt;/strong&gt;: Before attempting to rejoin, make sure that the old etcd data directory (data-dir) on the node being re-added has been deleted, so that it can start synchronization from scratch as a new member of the cluster.&lt;/p&gt;

&lt;h4 id=&quot;removing-data-dir&quot;&gt;Removing data-dir&lt;/h4&gt;

&lt;p&gt;The path to the data directory (data-dir) depends on how etcd is installed (via kubeadm or as a separate service).&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;
    &lt;p&gt;If etcd runs as a Static Pod (the most common case, with kubeadm)&lt;/p&gt;

    &lt;p&gt;Review the pod manifest on the node where etcd is already running:&lt;/p&gt;

    &lt;div class=&quot;language-bash highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;nb&quot;&gt;grep&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;data-dir&quot;&lt;/span&gt; /etc/kubernetes/manifests/etcd.yaml
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;    &lt;/div&gt;

    &lt;p&gt;The standard path in this case is usually: &lt;strong&gt;&lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;/var/lib/etcd&lt;/code&gt;&lt;/strong&gt;&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;If etcd runs as a system service (Systemd)&lt;/p&gt;

    &lt;p&gt;If you installed etcd manually or via binary files, check the service configuration:&lt;/p&gt;

    &lt;div class=&quot;language-bash highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;systemctl &lt;span class=&quot;nb&quot;&gt;cat &lt;/span&gt;etcd | &lt;span class=&quot;nb&quot;&gt;grep &lt;/span&gt;data-dir
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;    &lt;/div&gt;

    &lt;p&gt;Or look in the configuration file (if it exists): &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;/etc/etcd/etcd.conf&lt;/code&gt;.&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;Checking via running process information&lt;/p&gt;

    &lt;p&gt;You can see the path directly in the arguments of the running process:&lt;/p&gt;

    &lt;div class=&quot;language-bash highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;ps &lt;span class=&quot;nt&quot;&gt;-ef&lt;/span&gt; | &lt;span class=&quot;nb&quot;&gt;grep &lt;/span&gt;etcd | &lt;span class=&quot;nb&quot;&gt;grep &lt;/span&gt;data-dir
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;    &lt;/div&gt;
  &lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;When you delete the stale member entry (as described above) and want to try again:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Clear the directory on the node before restarting it&lt;/li&gt;
&lt;/ul&gt;

&lt;div class=&quot;language-bash highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;  &lt;span class=&quot;nb&quot;&gt;sudo rm&lt;/span&gt; &lt;span class=&quot;nt&quot;&gt;-rf&lt;/span&gt; /var/lib/etcd/&lt;span class=&quot;k&quot;&gt;*&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;&lt;em&gt;Note: Make sure you are deleting data on the correct node.&lt;/em&gt;&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;strong&gt;Check access permissions&lt;/strong&gt;: After clearing, make sure that the user running etcd (usually &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;etcd&lt;/code&gt; or &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;root&lt;/code&gt;) has write permissions for this directory.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2 id=&quot;configuring-the-load-balancer-for-control-plane-nodes-haproxykeepalived&quot;&gt;Configuring the load balancer for control plane nodes (HAProxy+Keepalived)&lt;/h2&gt;

&lt;p&gt;In a Kubernetes cluster with HA architecture, &lt;strong&gt;the load balancer must be started BEFORE the first control plane node is initialized&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;This is the “first brick rule”: we cannot build a wall if we have not decided where it will stand. &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;controlPlaneEndpoint&lt;/code&gt; is the entry point, and it must be reachable from the first second of the cluster’s life.&lt;/p&gt;

&lt;p&gt;The sequence of actions we will follow:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;&lt;strong&gt;Deploy HAProxy+Keepalived (10.10.0.100)&lt;/strong&gt;
    &lt;ul&gt;
      &lt;li&gt;It is not necessary to have “live” backends (control plane nodes) at this point, but the balancer must listen on port 6443 and be accessible on the network.&lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Add control plane nodes to the HAProxy config.&lt;/strong&gt;
    &lt;ul&gt;
      &lt;li&gt;We haven’t run &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;kubeadm init&lt;/code&gt; on the first node yet, but we’ll add it and the IPs of other nodes to the HAProxy backends.&lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Run &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;kubeadm init&lt;/code&gt; on the first control plane node.&lt;/strong&gt;
    &lt;ul&gt;
      &lt;li&gt;When &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;kubeadm&lt;/code&gt; tries to “knock” on &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;10.10.0.100:6443&lt;/code&gt;, the balancer will redirect this request to the very first node (where the API server just came up), and the initialization operation will complete successfully.&lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Join the other control plane nodes.&lt;/strong&gt;
    &lt;ul&gt;
      &lt;li&gt;Use &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;kubeadm join ... --control-plane&lt;/code&gt;.&lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
&lt;/ol&gt;
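&lt;p&gt;For step 3, the endpoint can be passed either as a command-line flag or via a kubeadm config file. Below is a minimal sketch of the relevant fields, assuming the kubeadm &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;v1beta3&lt;/code&gt; API and the external etcd endpoints from this guide; your actual init configuration may carry more fields.&lt;/p&gt;

&lt;div class=&quot;language-yaml highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
controlPlaneEndpoint: &quot;10.10.0.100:6443&quot;  # the VIP served by HAProxy+Keepalived
etcd:
  external:
    endpoints:
      - https://10.10.0.11:2379
      - https://10.10.0.12:2379
      - https://10.10.0.13:2379
    caFile: /etc/kubernetes/pki/etcd/ca.crt
    certFile: /etc/kubernetes/pki/apiserver-etcd-client.crt
    keyFile: /etc/kubernetes/pki/apiserver-etcd-client.key
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;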

&lt;h3 id=&quot;temporary-hack-if-you-cant-set-up-haproxy-right-now&quot;&gt;Temporary “hack” (if you can’t set up HAProxy right now)&lt;/h3&gt;

&lt;p&gt;If you are unable to deploy a separate machine for the balancer right now, you can use an “IP address maneuver”:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;strong&gt;Temporarily assign IP 10.10.0.100 to the first Control Plane node&lt;/strong&gt; as an additional one (via alias).&lt;/li&gt;
  &lt;li&gt;Run &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;kubeadm init&lt;/code&gt;. The system will see “itself” at this address and complete the configuration.&lt;/li&gt;
  &lt;li&gt;Later, when you deploy the real HAProxy, move this IP there.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3 id=&quot;configuring-haproxykeepalived&quot;&gt;Configuring HAProxy+Keepalived&lt;/h3&gt;

&lt;p&gt;To make our load balancer truly fault-tolerant (High Availability), we need to configure &lt;strong&gt;Keepalived&lt;/strong&gt;. It will allow two &lt;strong&gt;HAProxy&lt;/strong&gt; nodes to share a single “floating” IP address (Virtual IP — VIP).&lt;/p&gt;

&lt;h4 id=&quot;netplan-configuration&quot;&gt;Netplan configuration&lt;/h4&gt;

&lt;p&gt;We will reserve the address &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;10.10.0.100&lt;/code&gt; for Keepalived, and in the network settings section of each HAProxy virtual machine (we will have two of them) we will do the following:&lt;/p&gt;

&lt;div class=&quot;language-yaml highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;  &lt;span class=&quot;pi&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;na&quot;&gt;path&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;/etc/netplan/60-static-ip.yaml&lt;/span&gt;
    &lt;span class=&quot;na&quot;&gt;permissions&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;kt&quot;&gt;!!str&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;&apos;0600&apos;&lt;/span&gt;
    &lt;span class=&quot;na&quot;&gt;content&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;pi&quot;&gt;|&lt;/span&gt;
      &lt;span class=&quot;s&quot;&gt;network:&lt;/span&gt;
        &lt;span class=&quot;s&quot;&gt;version: 2&lt;/span&gt;
        &lt;span class=&quot;s&quot;&gt;ethernets:&lt;/span&gt;
          &lt;span class=&quot;s&quot;&gt;enp0s2:&lt;/span&gt;
            &lt;span class=&quot;s&quot;&gt;addresses:&lt;/span&gt;
              &lt;span class=&quot;s&quot;&gt;- 10.10.0.101/24 # Real IP of node LB1 (for the second one it will be .102)&lt;/span&gt;
            &lt;span class=&quot;s&quot;&gt;routes:&lt;/span&gt;
              &lt;span class=&quot;s&quot;&gt;- to: 10.10.0.0/24&lt;/span&gt;
                &lt;span class=&quot;s&quot;&gt;scope: link&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;We will specify the address &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;10.10.0.101&lt;/code&gt; for the primary balancer and &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;10.10.0.102&lt;/code&gt; for the backup.&lt;/p&gt;

&lt;h4 id=&quot;keepalived-configuration&quot;&gt;Keepalived configuration&lt;/h4&gt;

&lt;p&gt;Add the following block to &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;write_files&lt;/code&gt;. This configuration will force Keepalived to monitor the status of HAProxy and transfer the VIP to another node if the service or machine goes down.&lt;/p&gt;

&lt;div class=&quot;language-yaml highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;na&quot;&gt;write_files&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
  &lt;span class=&quot;pi&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;na&quot;&gt;path&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;/etc/keepalived/keepalived.conf&lt;/span&gt;
    &lt;span class=&quot;na&quot;&gt;content&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;pi&quot;&gt;|&lt;/span&gt;
      &lt;span class=&quot;s&quot;&gt;vrrp_script check_haproxy {&lt;/span&gt;
          &lt;span class=&quot;s&quot;&gt;script &quot;killall -0 haproxy&quot; # Check if the process is alive&lt;/span&gt;
          &lt;span class=&quot;s&quot;&gt;interval 2&lt;/span&gt;
          &lt;span class=&quot;s&quot;&gt;weight 2&lt;/span&gt;
      &lt;span class=&quot;s&quot;&gt;}&lt;/span&gt;

      &lt;span class=&quot;s&quot;&gt;vrrp_instance VI_1 {&lt;/span&gt;
          &lt;span class=&quot;s&quot;&gt;state MASTER              # On the second node, specify BACKUP&lt;/span&gt;
          &lt;span class=&quot;s&quot;&gt;interface enp0s2          # Name of your interface&lt;/span&gt;
          &lt;span class=&quot;s&quot;&gt;virtual_router_id 51      # Must be the same for both LBs&lt;/span&gt;
          &lt;span class=&quot;s&quot;&gt;priority 101              # On the second node, specify 100&lt;/span&gt;
          &lt;span class=&quot;s&quot;&gt;advert_int 1&lt;/span&gt;

          &lt;span class=&quot;s&quot;&gt;authentication {&lt;/span&gt;
              &lt;span class=&quot;s&quot;&gt;auth_type PASS&lt;/span&gt;
              &lt;span class=&quot;s&quot;&gt;auth_pass k8s_secret  # Shared password&lt;/span&gt;
          &lt;span class=&quot;s&quot;&gt;}&lt;/span&gt;

          &lt;span class=&quot;s&quot;&gt;virtual_ipaddress {&lt;/span&gt;
              &lt;span class=&quot;s&quot;&gt;10.10.0.100/24        # Your VIP address for Cluster Endpoint&lt;/span&gt;
          &lt;span class=&quot;s&quot;&gt;}&lt;/span&gt;

          &lt;span class=&quot;s&quot;&gt;track_script {&lt;/span&gt;
              &lt;span class=&quot;s&quot;&gt;check_haproxy&lt;/span&gt;
          &lt;span class=&quot;s&quot;&gt;}&lt;/span&gt;
      &lt;span class=&quot;s&quot;&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;h4 id=&quot;kernel-configuration-sysctl&quot;&gt;Kernel configuration (Sysctl)&lt;/h4&gt;

&lt;p&gt;In order for HAProxy to “sit” on the IP address &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;10.10.0.100&lt;/code&gt;, which does not yet belong to it (until Keepalived raises it), you need to enable &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;nonlocal_bind&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Let’s add this to &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;write_files&lt;/code&gt;:&lt;/p&gt;

&lt;div class=&quot;language-yaml highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;na&quot;&gt;write_files&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
  &lt;span class=&quot;pi&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;na&quot;&gt;path&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;/etc/sysctl.d/99-kubernetes-lb.conf&lt;/span&gt;
    &lt;span class=&quot;na&quot;&gt;content&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;pi&quot;&gt;|&lt;/span&gt;
      &lt;span class=&quot;s&quot;&gt;net.ipv4.ip_nonlocal_bind = 1&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;And let’s add commands to &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;runcmd&lt;/code&gt; to apply these settings:&lt;/p&gt;

&lt;div class=&quot;language-yaml highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;na&quot;&gt;runcmd&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
  &lt;span class=&quot;pi&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;sysctl --system&lt;/span&gt;
  &lt;span class=&quot;pi&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;netplan apply&lt;/span&gt;
  &lt;span class=&quot;pi&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;systemctl enable --now haproxy&lt;/span&gt;
  &lt;span class=&quot;pi&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;systemctl enable --now keepalived&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;h4 id=&quot;how-it-works-together&quot;&gt;How it works together&lt;/h4&gt;

&lt;ol&gt;
  &lt;li&gt;&lt;strong&gt;HAProxy&lt;/strong&gt; listens on port 6443, but it only “sees” traffic coming to the VIP &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;10.10.0.100&lt;/code&gt;.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Keepalived&lt;/strong&gt; keeps the address &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;10.10.0.100&lt;/code&gt; on the active node (MASTER).&lt;/li&gt;
  &lt;li&gt;When we run &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;kubeadm init --control-plane-endpoint &quot;10.10.0.100:6443&quot;&lt;/code&gt;, the request goes to the VIP -&amp;gt; hits HAProxy -&amp;gt; is redirected to the first available Control Plane node.&lt;/li&gt;
  &lt;li&gt;If the first balancer goes down, the second (BACKUP) will instantly take over the IP &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;10.10.0.100&lt;/code&gt;, and our Kubernetes cluster will continue to work without any connection interruptions.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;To deploy a fault-tolerant balancer, we need to put together settings from:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;snipets/cloud-init-config.yaml&lt;/code&gt; — time zone settings and network settings&lt;/li&gt;
  &lt;li&gt;&lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;snipets/cloud-init-user.yaml&lt;/code&gt; — we will use the &lt;strong&gt;k8sadmin&lt;/strong&gt; user, whose &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;containerd&lt;/code&gt; group we will replace with &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;haproxy&lt;/code&gt;&lt;/li&gt;
  &lt;li&gt;&lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;snipets/cloud-init-lb.yaml&lt;/code&gt; — settings specific to deploying and launching balancer nodes&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Let’s create &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;snipets/cloud-init-lb.yaml&lt;/code&gt;&lt;/p&gt;

&lt;div class=&quot;language-yaml highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;s&quot;&gt;cat &amp;lt;&amp;lt; &apos;EOF&apos; &amp;gt; snipets/cloud-init-lb.yaml&lt;/span&gt;

&lt;span class=&quot;na&quot;&gt;package_update&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;no&quot;&gt;true&lt;/span&gt;
&lt;span class=&quot;na&quot;&gt;package_upgrade&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;no&quot;&gt;true&lt;/span&gt;

&lt;span class=&quot;na&quot;&gt;packages&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
  &lt;span class=&quot;pi&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;haproxy&lt;/span&gt;
  &lt;span class=&quot;pi&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;keepalived&lt;/span&gt;

&lt;span class=&quot;na&quot;&gt;write_files&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
  &lt;span class=&quot;pi&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;na&quot;&gt;path&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;/etc/sysctl.d/99-haproxy.conf&lt;/span&gt;
    &lt;span class=&quot;na&quot;&gt;content&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;pi&quot;&gt;|&lt;/span&gt;
      &lt;span class=&quot;s&quot;&gt;net.ipv4.ip_nonlocal_bind = 1&lt;/span&gt;

  &lt;span class=&quot;pi&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;na&quot;&gt;path&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;/etc/haproxy/haproxy.cfg&lt;/span&gt;
    &lt;span class=&quot;na&quot;&gt;content&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;pi&quot;&gt;|&lt;/span&gt;
      &lt;span class=&quot;s&quot;&gt;global&lt;/span&gt;
          &lt;span class=&quot;s&quot;&gt;log /dev/log local0&lt;/span&gt;
          &lt;span class=&quot;s&quot;&gt;user haproxy&lt;/span&gt;
          &lt;span class=&quot;s&quot;&gt;group haproxy&lt;/span&gt;
          &lt;span class=&quot;s&quot;&gt;daemon&lt;/span&gt;
          &lt;span class=&quot;s&quot;&gt;stats socket /run/haproxy/admin.sock mode 660 level admin&lt;/span&gt;

      &lt;span class=&quot;s&quot;&gt;defaults&lt;/span&gt;
          &lt;span class=&quot;s&quot;&gt;log     global&lt;/span&gt;
          &lt;span class=&quot;s&quot;&gt;mode    tcp&lt;/span&gt;
          &lt;span class=&quot;s&quot;&gt;option  tcplog&lt;/span&gt;
          &lt;span class=&quot;s&quot;&gt;timeout connect 5000&lt;/span&gt;
          &lt;span class=&quot;s&quot;&gt;timeout client  50000&lt;/span&gt;
          &lt;span class=&quot;s&quot;&gt;timeout server  50000&lt;/span&gt;

      &lt;span class=&quot;s&quot;&gt;frontend k8s-api&lt;/span&gt;
          &lt;span class=&quot;s&quot;&gt;bind *:6443&lt;/span&gt;
          &lt;span class=&quot;s&quot;&gt;default_backend k8s-api-backend&lt;/span&gt;

      &lt;span class=&quot;s&quot;&gt;backend k8s-api-backend&lt;/span&gt;
          &lt;span class=&quot;s&quot;&gt;balance roundrobin&lt;/span&gt;
          &lt;span class=&quot;s&quot;&gt;option tcp-check&lt;/span&gt;
          &lt;span class=&quot;s&quot;&gt;timeout server 2h&lt;/span&gt;
          &lt;span class=&quot;s&quot;&gt;timeout client 2h&lt;/span&gt;
          &lt;span class=&quot;s&quot;&gt;# Here we specify the parameters of the control plane nodes that we know&lt;/span&gt;
          &lt;span class=&quot;s&quot;&gt;server cp-1 10.10.0.21:6443 check check-ssl verify none fall 3 rise 2&lt;/span&gt;
          &lt;span class=&quot;s&quot;&gt;server cp-2 10.10.0.22:6443 check check-ssl verify none fall 3 rise 2&lt;/span&gt;
          &lt;span class=&quot;s&quot;&gt;server cp-3 10.10.0.23:6443 check check-ssl verify none fall 3 rise 2&lt;/span&gt;

      &lt;span class=&quot;s&quot;&gt;listen stats&lt;/span&gt;
          &lt;span class=&quot;s&quot;&gt;bind *:8404&lt;/span&gt;
          &lt;span class=&quot;s&quot;&gt;mode http&lt;/span&gt;
          &lt;span class=&quot;s&quot;&gt;stats enable&lt;/span&gt;
          &lt;span class=&quot;s&quot;&gt;stats uri /stats&lt;/span&gt;

  &lt;span class=&quot;pi&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;na&quot;&gt;path&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;/etc/keepalived/keepalived.conf&lt;/span&gt;
    &lt;span class=&quot;na&quot;&gt;content&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;pi&quot;&gt;|&lt;/span&gt;
      &lt;span class=&quot;s&quot;&gt;vrrp_script check_haproxy {&lt;/span&gt;
          &lt;span class=&quot;s&quot;&gt;script &quot;killall -0 haproxy&quot;&lt;/span&gt;
          &lt;span class=&quot;s&quot;&gt;interval 2&lt;/span&gt;
          &lt;span class=&quot;s&quot;&gt;weight 2&lt;/span&gt;
      &lt;span class=&quot;s&quot;&gt;}&lt;/span&gt;
      &lt;span class=&quot;s&quot;&gt;vrrp_instance VI_1 {&lt;/span&gt;
          &lt;span class=&quot;s&quot;&gt;state ${LB_STATE} # lb1 will be MASTER, lb2 will be BACKUP&lt;/span&gt;
          &lt;span class=&quot;s&quot;&gt;interface enp0s2&lt;/span&gt;
          &lt;span class=&quot;s&quot;&gt;virtual_router_id 51&lt;/span&gt;
          &lt;span class=&quot;s&quot;&gt;priority ${LB_PRIORITY} # for lb1 it will be 101, for lb2 it will be 100&lt;/span&gt;
          &lt;span class=&quot;s&quot;&gt;advert_int 1&lt;/span&gt;
          &lt;span class=&quot;s&quot;&gt;authentication {&lt;/span&gt;
              &lt;span class=&quot;s&quot;&gt;auth_type PASS&lt;/span&gt;
              &lt;span class=&quot;s&quot;&gt;auth_pass k8s_pwd # Replace k8s_pwd with a strong password&lt;/span&gt;
          &lt;span class=&quot;s&quot;&gt;}&lt;/span&gt;
          &lt;span class=&quot;s&quot;&gt;virtual_ipaddress {&lt;/span&gt;
              &lt;span class=&quot;s&quot;&gt;${LB_IP} # shared balancer (VIP) address 10.10.0.100/24&lt;/span&gt;
          &lt;span class=&quot;s&quot;&gt;}&lt;/span&gt;
          &lt;span class=&quot;s&quot;&gt;track_script {&lt;/span&gt;
              &lt;span class=&quot;s&quot;&gt;check_haproxy&lt;/span&gt;
          &lt;span class=&quot;s&quot;&gt;}&lt;/span&gt;
      &lt;span class=&quot;s&quot;&gt;}&lt;/span&gt;

&lt;span class=&quot;na&quot;&gt;runcmd&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
  &lt;span class=&quot;pi&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;sysctl --system&lt;/span&gt;
  &lt;span class=&quot;pi&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;systemctl enable --now haproxy&lt;/span&gt;
  &lt;span class=&quot;pi&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;systemctl enable --now keepalived&lt;/span&gt;
&lt;span class=&quot;s&quot;&gt;EOF&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;
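The `weight 2` in `check_haproxy` is what drives failover: while the check script succeeds, Keepalived adds the weight to the node's base priority, and the node with the highest effective priority holds the VIP. A minimal sketch of that arithmetic (the helper function is purely illustrative, not part of Keepalived):

```shell
#!/bin/sh
# Effective VRRP priority: base priority plus the track-script weight
# while "killall -0 haproxy" keeps succeeding (i.e. HAProxy is alive).
effective_priority() {
  base=$1; haproxy_alive=$2; weight=2
  if [ "$haproxy_alive" = "yes" ]; then
    echo $((base + weight))
  else
    echo "$base"
  fi
}

effective_priority 101 yes  # lb1 healthy: 103 -> holds the VIP
effective_priority 100 yes  # lb2 healthy: 102
effective_priority 101 no   # HAProxy died on lb1: 101 < 102 -> VIP moves to lb2
```

With both nodes healthy, lb1 wins (103 vs 102); if HAProxy dies on lb1, its effective priority falls to 101, below lb2's 102, so the VIP fails over to lb2 within a few advertisement intervals.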

&lt;h3 id=&quot;deploying-haproxykeepalived&quot;&gt;Deploying HAProxy+Keepalived&lt;/h3&gt;

&lt;p&gt;Let’s create virtual machines for HAProxy+Keepalived:&lt;/p&gt;

&lt;p&gt;To create a fault-tolerant balancer, we need to run two almost identical commands. The only differences between them are the Keepalived settings (&lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;state&lt;/code&gt; and &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;priority&lt;/code&gt;) and the individual IP addresses of the nodes.&lt;/p&gt;

&lt;p&gt;Let’s start the first balancer node:&lt;/p&gt;

&lt;div class=&quot;language-bash highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;c&quot;&gt;# Real IP address for the interface (netplan)&lt;/span&gt;
&lt;span class=&quot;nb&quot;&gt;export &lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;VM_IP&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;10.10.0.101/24&quot;&lt;/span&gt;

&lt;span class=&quot;c&quot;&gt;# Parameters for keepalived.conf&lt;/span&gt;
&lt;span class=&quot;nb&quot;&gt;export &lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;LB_IP&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;10.10.0.100/24&quot;&lt;/span&gt;
&lt;span class=&quot;nb&quot;&gt;export &lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;LB_STATE&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;MASTER&quot;&lt;/span&gt;
&lt;span class=&quot;nb&quot;&gt;export &lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;LB_PRIORITY&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;101&quot;&lt;/span&gt;

multipass launch &lt;span class=&quot;nt&quot;&gt;--name&lt;/span&gt; lb1 &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
  &lt;span class=&quot;nt&quot;&gt;--cpus&lt;/span&gt; 1 &lt;span class=&quot;nt&quot;&gt;--memory&lt;/span&gt; 1G &lt;span class=&quot;nt&quot;&gt;--disk&lt;/span&gt; 5G &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
  &lt;span class=&quot;nt&quot;&gt;--network&lt;/span&gt; &lt;span class=&quot;nv&quot;&gt;name&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;en0,mode&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;manual &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
  &lt;span class=&quot;nt&quot;&gt;--cloud-init&lt;/span&gt; &amp;lt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;yq eval-all &lt;span class=&quot;s1&quot;&gt;&apos;
    # 1. Merging all files into a single object
    . as $item ireduce ({}; . *+ $item) |

    # 2. Running netplan apply after sysctl
    .runcmd |= (
      filter(. != &quot;netplan apply&quot;) |
      (to_entries | .[] | select(.value == &quot;sysctl --system&quot;) | .key) as $idx |
      .[:$idx+1] + [&quot;netplan apply&quot;] + .[$idx+1:]
    ) |
    .runcmd[].headComment = &quot;&quot; |

    # 3. Replacing a user group
    with(.users[] | select(.name == &quot;k8sadmin&quot;);
      .groups |= sub(&quot;containerd&quot;, &quot;haproxy&quot;)
    ) |

    #4. Configuring IP for application via Netplan
    with(.write_files[] | select(.path == &quot;/etc/netplan/60-static-ip.yaml&quot;);
      .content |= (from_yaml | .network.ethernets.enp0s2.addresses += [strenv(VM_IP)] | to_yaml)
    ) |

    # 5. Replace variables ${LB_...} in all write_files files
    with(.write_files[];
      .content |= sub(&quot;\${LB_STATE}&quot;, strenv(LB_STATE)) |
      .content |= sub(&quot;\${LB_PRIORITY}&quot;, strenv(LB_PRIORITY)) |
      .content |= sub(&quot;\${LB_IP}&quot;, strenv(LB_IP))
    )
  &apos;&lt;/span&gt; &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
  snipets/cloud-init-config.yaml &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
  snipets/cloud-init-user.yaml &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
  snipets/cloud-init-lb.yaml&lt;span class=&quot;o&quot;&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;
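Step 2 of the yq filter above can look opaque. Its effect, re-implemented here with plain awk purely for illustration, is to remove any existing `netplan apply` entry from `runcmd` and re-insert it immediately after `sysctl --system`:

```shell
#!/bin/sh
# Illustration of what step 2 of the yq filter does to runcmd:
# drop any existing "netplan apply" and re-insert it right after "sysctl --system".
reorder_runcmd() {
  awk '
    $0 == "netplan apply" { next }                      # filter it out
    { print }
    $0 == "sysctl --system" { print "netplan apply" }   # re-insert after sysctl
  '
}

printf '%s\n' "netplan apply" "sysctl --system" "systemctl enable --now haproxy" |
  reorder_runcmd
# sysctl --system
# netplan apply
# systemctl enable --now haproxy
```

This ordering matters because the static IP written by cloud-init only comes up after `netplan apply`, which in turn relies on the sysctl settings already being in place.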

&lt;p&gt;And now the second balancer node:&lt;/p&gt;

&lt;div class=&quot;language-bash highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;c&quot;&gt;# Real IP address for the interface (netplan)&lt;/span&gt;
&lt;span class=&quot;nb&quot;&gt;export &lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;VM_IP&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;10.10.0.102/24&quot;&lt;/span&gt;

&lt;span class=&quot;c&quot;&gt;# Parameters for keepalived.conf&lt;/span&gt;
&lt;span class=&quot;nb&quot;&gt;export &lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;LB_IP&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;10.10.0.100/24&quot;&lt;/span&gt;
&lt;span class=&quot;nb&quot;&gt;export &lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;LB_STATE&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;BACKUP&quot;&lt;/span&gt;
&lt;span class=&quot;nb&quot;&gt;export &lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;LB_PRIORITY&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;100&quot;&lt;/span&gt;

multipass launch &lt;span class=&quot;nt&quot;&gt;--name&lt;/span&gt; lb2 &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
  &lt;span class=&quot;nt&quot;&gt;--cpus&lt;/span&gt; 1 &lt;span class=&quot;nt&quot;&gt;--memory&lt;/span&gt; 1G &lt;span class=&quot;nt&quot;&gt;--disk&lt;/span&gt; 5G &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
  &lt;span class=&quot;nt&quot;&gt;--network&lt;/span&gt; &lt;span class=&quot;nv&quot;&gt;name&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;en0,mode&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;manual &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
  &lt;span class=&quot;nt&quot;&gt;--cloud-init&lt;/span&gt; &amp;lt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;yq eval-all &lt;span class=&quot;s1&quot;&gt;&apos;
    # 1. Merging all files into a single object
    . as $item ireduce ({}; . *+ $item) |

    # 2. Running netplan apply after sysctl
    .runcmd |= (
      filter(. != &quot;netplan apply&quot;) |
      (to_entries | .[] | select(.value == &quot;sysctl --system&quot;) | .key) as $idx |
      .[:$idx+1] + [&quot;netplan apply&quot;] + .[$idx+1:]
    ) |
    .runcmd[].headComment = &quot;&quot; |

    # 3. Replacing a user group
    with(.users[] | select(.name == &quot;k8sadmin&quot;);
      .groups |= sub(&quot;containerd&quot;, &quot;haproxy&quot;)
    ) |

    #4. Configuring IP for application via Netplan
    with(.write_files[] | select(.path == &quot;/etc/netplan/60-static-ip.yaml&quot;);
      .content |= (from_yaml | .network.ethernets.enp0s2.addresses += [strenv(VM_IP)] | to_yaml)
    ) |

    # 5. Replace variables ${LB_...} in all write_files files
    with(.write_files[];
      .content |= sub(&quot;\${LB_STATE}&quot;, strenv(LB_STATE)) |
      .content |= sub(&quot;\${LB_PRIORITY}&quot;, strenv(LB_PRIORITY)) |
      .content |= sub(&quot;\${LB_IP}&quot;, strenv(LB_IP))
    )
  &apos;&lt;/span&gt; &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
  snipets/cloud-init-config.yaml &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
  snipets/cloud-init-user.yaml &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
  snipets/cloud-init-lb.yaml&lt;span class=&quot;o&quot;&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;After launching, let’s check on one of the balancer nodes that the virtual IP &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;10.10.0.100&lt;/code&gt; has appeared:&lt;/p&gt;

&lt;div class=&quot;language-bash highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;multipass &lt;span class=&quot;nb&quot;&gt;exec &lt;/span&gt;lb1 &lt;span class=&quot;nt&quot;&gt;--&lt;/span&gt; ip addr show enp0s2
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;
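In the output, the VIP should show up as a secondary address on the interface of whichever node is currently MASTER. A sketch of what to look for (the sample output below is abbreviated and illustrative, not captured from a real run):

```shell
#!/bin/sh
# Abbreviated, illustrative `ip addr show enp0s2` output on the MASTER node:
# the node's own address plus the Keepalived VIP as a secondary address.
sample='inet 10.10.0.101/24 brd 10.10.0.255 scope global enp0s2
inet 10.10.0.100/24 scope global secondary enp0s2'

# On the BACKUP node only the node address is present, so this check fails there.
if echo "$sample" | grep -q '10.10.0.100/24'; then
  echo "VIP present on this node"
fi
```

If the VIP is missing on both nodes, check the Keepalived logs (`journalctl -u keepalived`) for VRRP election messages.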

&lt;h4 id=&quot;what-to-do-after-launch&quot;&gt;What to do after launch?&lt;/h4&gt;

&lt;ul&gt;
  &lt;li&gt;
    &lt;p&gt;&lt;strong&gt;Check statistics&lt;/strong&gt;: Open &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;http://10.10.0.100:8404/stats&lt;/code&gt; in your browser. You will see that the backends (our control plane nodes) are marked in red (because they have not been initialized yet) — this is normal.&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;&lt;strong&gt;Launch Kubernetes&lt;/strong&gt;: Now we can run &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;kubeadm init&lt;/code&gt; on the first Control Plane node. Since VIP &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;10.10.0.100&lt;/code&gt; is already active and HAProxy is listening on port &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;6443&lt;/code&gt;, there will be no timeout error.&lt;/p&gt;
  &lt;/li&gt;
&lt;/ul&gt;

&lt;h2 id=&quot;deploying-the-control-plane&quot;&gt;Deploying the Control Plane&lt;/h2&gt;

&lt;p&gt;Let’s gather the cloud-init settings for deploying our control plane nodes. They will be similar to the ones we used to create etcd nodes, but with some differences.&lt;/p&gt;

&lt;p&gt;According to the &lt;a href=&quot;https://andygol-k8s.netlify.app/docs/setup/production-environment/tools/kubeadm/install-kubeadm/#before-you-begin&quot;&gt;recommendations&lt;/a&gt;, we will allocate at least 2 CPU cores and 2 GB of RAM to each control plane node. For the first control plane node, we will use the IP address &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;10.10.0.21&lt;/code&gt;. We will also install HAProxy as a local load balancer for accessing the etcd nodes.&lt;/p&gt;

&lt;h3 id=&quot;configuring-a-local-load-balancer-for-access-to-etcd-nodes&quot;&gt;Configuring a local load balancer for access to etcd nodes&lt;/h3&gt;

&lt;p&gt;To access the etcd nodes, we will deploy a local load balancer. Each API server will talk to its own load balancer (&lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;127.0.0.1:2379&lt;/code&gt;). This approach is called &lt;strong&gt;“Sidecar Load Balancing”&lt;/strong&gt; (or a local proxy). It provides maximum fault tolerance: even if the network between nodes becomes unstable, each API server keeps its own local path to etcd.&lt;/p&gt;

&lt;p&gt;Since we are doing this for a Kubernetes cluster, the best way to implement this is to use &lt;a href=&quot;https://andygol-k8s.netlify.app/docs/tasks/configure-pod-container/static-pod/&quot;&gt;Static Pods&lt;/a&gt;. The node manager (kubelet) will start and maintain HAProxy itself.&lt;/p&gt;

&lt;h4 id=&quot;preparing-the-haproxy-configuration&quot;&gt;Preparing the HAProxy configuration&lt;/h4&gt;

&lt;p&gt;For the control plane nodes, create a file with the HAProxy settings, &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;/etc/haproxy-lbaas/haproxy.cfg&lt;/code&gt;, to balance traffic to the etcd nodes.&lt;/p&gt;

&lt;div class=&quot;language-yaml highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;s&quot;&gt;cat &amp;lt;&amp;lt; EOF &amp;gt; snipets/cloud-init-cp-haproxy.yaml&lt;/span&gt;
&lt;span class=&quot;na&quot;&gt;write_files&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
  &lt;span class=&quot;c1&quot;&gt;# Configuring HAProxy to access etcd nodes&lt;/span&gt;
  &lt;span class=&quot;pi&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;na&quot;&gt;path&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;/etc/haproxy-lbaas/haproxy.cfg&lt;/span&gt;
    &lt;span class=&quot;na&quot;&gt;content&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;pi&quot;&gt;|&lt;/span&gt;
      &lt;span class=&quot;s&quot;&gt;global&lt;/span&gt;
          &lt;span class=&quot;s&quot;&gt;log /dev/log local0&lt;/span&gt;
          &lt;span class=&quot;s&quot;&gt;user haproxy&lt;/span&gt;
          &lt;span class=&quot;s&quot;&gt;group haproxy&lt;/span&gt;

      &lt;span class=&quot;s&quot;&gt;defaults&lt;/span&gt;
          &lt;span class=&quot;s&quot;&gt;log global&lt;/span&gt;
          &lt;span class=&quot;s&quot;&gt;mode tcp&lt;/span&gt;
          &lt;span class=&quot;s&quot;&gt;option tcplog&lt;/span&gt;
          &lt;span class=&quot;s&quot;&gt;timeout connect 5000ms&lt;/span&gt;
          &lt;span class=&quot;s&quot;&gt;timeout client 50000ms&lt;/span&gt;
          &lt;span class=&quot;s&quot;&gt;timeout server 50000ms&lt;/span&gt;

      &lt;span class=&quot;s&quot;&gt;frontend etcd-local&lt;/span&gt;
          &lt;span class=&quot;s&quot;&gt;bind 127.0.0.1:2379&lt;/span&gt;
          &lt;span class=&quot;s&quot;&gt;description &quot;Local proxy for etcd cluster&quot;&lt;/span&gt;
          &lt;span class=&quot;s&quot;&gt;default_backend etcd-cluster&lt;/span&gt;

      &lt;span class=&quot;s&quot;&gt;backend etcd-cluster&lt;/span&gt;
          &lt;span class=&quot;s&quot;&gt;option tcp-check&lt;/span&gt;
          &lt;span class=&quot;s&quot;&gt;# Important: we use roundrobin for load balancing&lt;/span&gt;
          &lt;span class=&quot;s&quot;&gt;balance roundrobin&lt;/span&gt;
          &lt;span class=&quot;s&quot;&gt;server etcd-1 10.10.0.11:2379 check inter 2000 rise 2 fall 3&lt;/span&gt;
          &lt;span class=&quot;s&quot;&gt;server etcd-2 10.10.0.12:2379 check inter 2000 rise 2 fall 3&lt;/span&gt;
          &lt;span class=&quot;s&quot;&gt;server etcd-3 10.10.0.13:2379 check inter 2000 rise 2 fall 3&lt;/span&gt;
&lt;span class=&quot;s&quot;&gt;EOF&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;
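The `inter`/`fall`/`rise` numbers translate directly into detection times: a failed etcd node is marked DOWN after `fall` consecutive failed checks spaced `inter` milliseconds apart, and brought back after `rise` successful ones. A quick back-of-the-envelope check:

```shell
#!/bin/sh
# Detection timing implied by: check inter 2000 rise 2 fall 3
inter_ms=2000; fall=3; rise=2

down_after_s=$(( inter_ms * fall / 1000 ))
up_after_s=$(( inter_ms * rise / 1000 ))

echo "marked DOWN after ~${down_after_s}s of failures"   # ~6s
echo "marked UP after ~${up_after_s}s of recoveries"     # ~4s
```

Roughly six seconds of failures before traffic stops going to a dead etcd member, and four seconds of recoveries before it is reused; tighten `inter` if that window is too wide for your setup.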

&lt;h4 id=&quot;creating-a-static-pod-for-haproxy&quot;&gt;Creating a Static Pod for HAProxy&lt;/h4&gt;

&lt;p&gt;Let’s have &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;kubelet&lt;/code&gt; run HAProxy by creating a manifest in the static pods folder (by default, this is &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;/etc/kubernetes/manifests/&lt;/code&gt;).&lt;/p&gt;

&lt;p&gt;Let’s create the file &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;/etc/kubernetes/manifests/etcd-haproxy.yaml&lt;/code&gt; with the HAProxy static pod manifest:&lt;/p&gt;

&lt;div class=&quot;language-yaml highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;s&quot;&gt;cat &amp;lt;&amp;lt; EOF &amp;gt; snipets/cloud-init-cp-haproxy-manifest.yaml&lt;/span&gt;
&lt;span class=&quot;na&quot;&gt;write_files&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
  &lt;span class=&quot;c1&quot;&gt;# Manifest of static HAProxy proxy for balancing traffic to etcd nodes&lt;/span&gt;
  &lt;span class=&quot;pi&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;na&quot;&gt;path&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;/etc/kubernetes/manifests/etcd-haproxy.yaml&lt;/span&gt;
    &lt;span class=&quot;na&quot;&gt;content&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;pi&quot;&gt;|&lt;/span&gt;
      &lt;span class=&quot;s&quot;&gt;apiVersion: v1&lt;/span&gt;
      &lt;span class=&quot;s&quot;&gt;kind: Pod&lt;/span&gt;
      &lt;span class=&quot;s&quot;&gt;metadata:&lt;/span&gt;
        &lt;span class=&quot;s&quot;&gt;name: etcd-haproxy&lt;/span&gt;
        &lt;span class=&quot;s&quot;&gt;namespace: kube-system&lt;/span&gt;
        &lt;span class=&quot;s&quot;&gt;labels:&lt;/span&gt;
          &lt;span class=&quot;s&quot;&gt;component: etcd-haproxy&lt;/span&gt;
          &lt;span class=&quot;s&quot;&gt;tier: control-plane&lt;/span&gt;
      &lt;span class=&quot;s&quot;&gt;spec:&lt;/span&gt;
        &lt;span class=&quot;s&quot;&gt;containers:&lt;/span&gt;
        &lt;span class=&quot;s&quot;&gt;- name: etcd-haproxy&lt;/span&gt;
          &lt;span class=&quot;s&quot;&gt;image: haproxy:2.8-alpine # Use a lightweight image&lt;/span&gt;
          &lt;span class=&quot;s&quot;&gt;resources:&lt;/span&gt;
            &lt;span class=&quot;s&quot;&gt;requests:&lt;/span&gt;
              &lt;span class=&quot;s&quot;&gt;cpu: 100m&lt;/span&gt;
              &lt;span class=&quot;s&quot;&gt;memory: 100Mi&lt;/span&gt;
          &lt;span class=&quot;s&quot;&gt;volumeMounts:&lt;/span&gt;
          &lt;span class=&quot;s&quot;&gt;- name: haproxy-config&lt;/span&gt;
            &lt;span class=&quot;s&quot;&gt;mountPath: /usr/local/etc/haproxy/haproxy.cfg&lt;/span&gt;
            &lt;span class=&quot;s&quot;&gt;readOnly: true&lt;/span&gt;
        &lt;span class=&quot;s&quot;&gt;hostNetwork: true # Important: the pod must see 127.0.0.1 of the host&lt;/span&gt;
        &lt;span class=&quot;s&quot;&gt;volumes:&lt;/span&gt;
        &lt;span class=&quot;s&quot;&gt;- name: haproxy-config&lt;/span&gt;
          &lt;span class=&quot;s&quot;&gt;hostPath:&lt;/span&gt;
            &lt;span class=&quot;s&quot;&gt;path: /etc/haproxy-lbaas/haproxy.cfg&lt;/span&gt;
            &lt;span class=&quot;s&quot;&gt;type: File&lt;/span&gt;
&lt;span class=&quot;s&quot;&gt;EOF&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;h4 id=&quot;deploying-the-first-control-panel-node&quot;&gt;Deploying the first control plane node&lt;/h4&gt;

&lt;p&gt;Let’s deploy the first control plane node &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;cp-1&lt;/code&gt; with IP &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;10.10.0.21/24&lt;/code&gt;:&lt;/p&gt;

&lt;div class=&quot;language-bash highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;nb&quot;&gt;export &lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;VM_IP&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;10.10.0.21/24&quot;&lt;/span&gt;

multipass launch &lt;span class=&quot;nt&quot;&gt;--name&lt;/span&gt; cp-1 &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
  &lt;span class=&quot;nt&quot;&gt;--cpus&lt;/span&gt; 2 &lt;span class=&quot;nt&quot;&gt;--memory&lt;/span&gt; 2.5G &lt;span class=&quot;nt&quot;&gt;--disk&lt;/span&gt; 8G &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
  &lt;span class=&quot;nt&quot;&gt;--network&lt;/span&gt; &lt;span class=&quot;nv&quot;&gt;name&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;en0,mode&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;manual &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
  &lt;span class=&quot;nt&quot;&gt;--cloud-init&lt;/span&gt; &amp;lt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt; yq eval-all &lt;span class=&quot;s1&quot;&gt;&apos;
      # Merge all files into a single object
      . as $item ireduce ({}; . *+ $item) |

      # Update network configuration
      with(.write_files[] | select(.path == &quot;/etc/netplan/60-static-ip.yaml&quot;);
        .content |= (
          from_yaml |
          .network.ethernets.enp0s2.addresses += [strenv(VM_IP)] |
          to_yaml
        )
      ) &apos;&lt;/span&gt; &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
      snipets/cloud-init-config.yaml &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
      snipets/cloud-init-user.yaml &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
      snipets/cloud-init-base.yaml &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
      snipets/cloud-init-cp-haproxy.yaml &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
      snipets/cloud-init-cp-haproxy-manifest.yaml&lt;span class=&quot;o&quot;&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;After spinning up the control plane node, copy the following files from any etcd node to the &lt;strong&gt;first node&lt;/strong&gt; of the control plane (other control plane nodes will not need this step if they join within the first two hours after initialization, while the Secret with the keys still exists; after that, the system removes it).&lt;/p&gt;

&lt;div class=&quot;language-bash highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;c&quot;&gt;# 1. Prepare the files on the source node (10.10.0.11):&lt;/span&gt;
&lt;span class=&quot;c&quot;&gt;# Copy them to a temporary folder and change the owner to the current user so that scp can read them.&lt;/span&gt;
ssh &lt;span class=&quot;nt&quot;&gt;-i&lt;/span&gt; ~/.ssh/k8s_cluster_key k8sadmin@10.10.0.11 &lt;span class=&quot;s2&quot;&gt;&quot; &lt;/span&gt;&lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;
  mkdir -p /tmp/cert/ &amp;amp;&amp;amp; &lt;/span&gt;&lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;
  sudo cp /etc/kubernetes/pki/etcd/ca.crt /tmp/cert/ &amp;amp;&amp;amp; &lt;/span&gt;&lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;
  sudo cp /etc/kubernetes/pki/apiserver-etcd-client.* /tmp/cert/ &amp;amp;&amp;amp; &lt;/span&gt;&lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;
  sudo chown k8sadmin:k8sadmin /tmp/cert/* &quot;&lt;/span&gt;

&lt;span class=&quot;c&quot;&gt;# 2. Transferring files between nodes via your local terminal:&lt;/span&gt;
&lt;span class=&quot;c&quot;&gt;# Use quotation marks to handle wildcards (*) on the remote side&lt;/span&gt;
scp &lt;span class=&quot;nt&quot;&gt;-i&lt;/span&gt; ~/.ssh/k8s_cluster_key &lt;span class=&quot;nt&quot;&gt;-r&lt;/span&gt; &lt;span class=&quot;s1&quot;&gt;&apos;k8sadmin@10.10.0.11:/tmp/cert/&apos;&lt;/span&gt; &lt;span class=&quot;s1&quot;&gt;&apos;k8sadmin@10.10.0.21:/tmp&apos;&lt;/span&gt;

&lt;span class=&quot;c&quot;&gt;# 3. Placing files on the target node (10.10.0.21):&lt;/span&gt;
&lt;span class=&quot;c&quot;&gt;# Create a folder (if it does not exist), move the files, and restore root privileges.&lt;/span&gt;
ssh &lt;span class=&quot;nt&quot;&gt;-i&lt;/span&gt; ~/.ssh/k8s_cluster_key k8sadmin@10.10.0.21 &lt;span class=&quot;s2&quot;&gt;&quot; &lt;/span&gt;&lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;
  sudo mkdir -p /etc/kubernetes/pki/etcd/ &amp;amp;&amp;amp; &lt;/span&gt;&lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;
  sudo mv /tmp/cert/ca.crt /etc/kubernetes/pki/etcd/ &amp;amp;&amp;amp; &lt;/span&gt;&lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;
  sudo chown root:root /etc/kubernetes/pki/etcd/ca.crt &amp;amp;&amp;amp; &lt;/span&gt;&lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;
  sudo mv /tmp/cert/apiserver-etcd-client.* /etc/kubernetes/pki/ &amp;amp;&amp;amp; &lt;/span&gt;&lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;
  sudo chown root:root /etc/kubernetes/pki/apiserver-etcd-client.*&quot;&lt;/span&gt;

&lt;span class=&quot;c&quot;&gt;#4. Cleaning temporary files:&lt;/span&gt;
ssh &lt;span class=&quot;nt&quot;&gt;-i&lt;/span&gt; ~/.ssh/k8s_cluster_key k8sadmin@10.10.0.11 &lt;span class=&quot;s2&quot;&gt;&quot;rm -rf /tmp/cert&quot;&lt;/span&gt;
ssh &lt;span class=&quot;nt&quot;&gt;-i&lt;/span&gt; ~/.ssh/k8s_cluster_key k8sadmin@10.10.0.21 &lt;span class=&quot;s2&quot;&gt;&quot;rm -rf /tmp/cert&quot;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;h4 id=&quot;checking-the-use-of-static-pods-haproxy&quot;&gt;Checking the use of Static Pods HAProxy&lt;/h4&gt;

&lt;p&gt;We placed the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;etcd-haproxy.yaml&lt;/code&gt; manifest in &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;/etc/kubernetes/manifests/&lt;/code&gt;. However, &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;kubelet&lt;/code&gt; ignores this folder until it is configured and started (this happens when the control plane node is initialized).&lt;/p&gt;

&lt;p&gt;In addition, during the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;preflight&lt;/code&gt; phase, the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;kubeadm&lt;/code&gt; command attempts to verify the availability of etcd &lt;strong&gt;before&lt;/strong&gt; any cluster components start running. Since our HAProxy is supposed to run as a Pod, it is not yet running, port &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;127.0.0.1:2379&lt;/code&gt; is closed, and we will get a &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;connection refused&lt;/code&gt; error when attempting to initialize the control plane node.&lt;/p&gt;

&lt;pre&gt;&lt;code class=&quot;language-log&quot;&gt;[preflight] Running pre-flight checks
	[WARNING ExternalEtcdVersion]: Get &quot;https://127.0.0.1:2379/version&quot;: dial tcp 127.0.0.1:2379: connect: connection refused
&lt;/code&gt;&lt;/pre&gt;

&lt;ol&gt;
  &lt;li&gt;
    &lt;p&gt;&lt;strong&gt;Checking via cURL (the fastest way)&lt;/strong&gt;&lt;/p&gt;

    &lt;p&gt;Since we are using TLS, we will need the certificates that we have already prepared for kubeadm. Let’s try to access etcd via the local port:&lt;/p&gt;

    &lt;div class=&quot;language-bash highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;nb&quot;&gt;sudo &lt;/span&gt;curl &lt;span class=&quot;nt&quot;&gt;--cacert&lt;/span&gt; /etc/kubernetes/pki/etcd/ca.crt &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
     &lt;span class=&quot;nt&quot;&gt;--cert&lt;/span&gt; /etc/kubernetes/pki/apiserver-etcd-client.crt &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
     &lt;span class=&quot;nt&quot;&gt;--key&lt;/span&gt; /etc/kubernetes/pki/apiserver-etcd-client.key &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
     https://127.0.0.1:2379/health
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;    &lt;/div&gt;

    &lt;p&gt;&lt;strong&gt;Expected result:&lt;/strong&gt; &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;{&quot;health&quot;:&quot;true&quot;}&lt;/code&gt;. If we get this response, it means that HAProxy is successfully forwarding traffic to one of the nodes in our etcd cluster.&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;&lt;strong&gt;Checking the status of Static Pod&lt;/strong&gt;&lt;/p&gt;

    &lt;p&gt;Let’s check if the HAProxy container has started at all. Since &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;kubectl&lt;/code&gt; may not work if etcd is unavailable, use the container runtime tool (in our case, crictl):&lt;/p&gt;

    &lt;div class=&quot;language-bash highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;c&quot;&gt;# For containerd (standard for modern K8s)&lt;/span&gt;
&lt;span class=&quot;nb&quot;&gt;sudo &lt;/span&gt;crictl ps | &lt;span class=&quot;nb&quot;&gt;grep &lt;/span&gt;etcd-haproxy

&lt;span class=&quot;c&quot;&gt;# View proxy logs&lt;/span&gt;
&lt;span class=&quot;nb&quot;&gt;sudo &lt;/span&gt;crictl logs &lt;span class=&quot;si&quot;&gt;$(&lt;/span&gt;&lt;span class=&quot;nb&quot;&gt;sudo &lt;/span&gt;crictl ps &lt;span class=&quot;nt&quot;&gt;-q&lt;/span&gt; &lt;span class=&quot;nt&quot;&gt;--name&lt;/span&gt; etcd-haproxy&lt;span class=&quot;si&quot;&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;    &lt;/div&gt;

    &lt;p&gt;The HAProxy logs should contain entries about successful health checks (Health check passed) for backend nodes 10.10.0.11, .12, .13.&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;&lt;strong&gt;Checking via system sockets&lt;/strong&gt;&lt;/p&gt;

    &lt;p&gt;Let’s make sure that HAProxy is actually listening on port 2379 on the local interface:&lt;/p&gt;

    &lt;div class=&quot;language-bash highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;nb&quot;&gt;sudo &lt;/span&gt;ss &lt;span class=&quot;nt&quot;&gt;-tulpn&lt;/span&gt; | &lt;span class=&quot;nb&quot;&gt;grep &lt;/span&gt;2379
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;    &lt;/div&gt;

    &lt;p&gt;We should see that the process (haproxy) is listening on &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;127.0.0.1:2379&lt;/code&gt;.&lt;/p&gt;
  &lt;/li&gt;
&lt;/ol&gt;

&lt;h3 id=&quot;configuring-kubeadm&quot;&gt;Configuring kubeadm&lt;/h3&gt;

&lt;p&gt;Let’s create a file named &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;kubeadm-config.yaml&lt;/code&gt; in the home directory of the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;k8sadmin&lt;/code&gt; user on the first node to initialize the control plane.&lt;/p&gt;

&lt;div class=&quot;language-bash highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;ssh &lt;span class=&quot;nt&quot;&gt;-i&lt;/span&gt; ~/.ssh/k8s_cluster_key k8sadmin@10.10.0.21 &lt;span class=&quot;s2&quot;&gt;&quot;cat &amp;lt;&amp;lt; &apos;EOF&apos; &amp;gt; &lt;/span&gt;&lt;span class=&quot;se&quot;&gt;\$&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;HOME/kubeadm-config.yaml
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
kubernetesVersion: &lt;/span&gt;&lt;span class=&quot;se&quot;&gt;\&quot;&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;v1.34.3&lt;/span&gt;&lt;span class=&quot;se&quot;&gt;\&quot;&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;
controlPlaneEndpoint: &lt;/span&gt;&lt;span class=&quot;se&quot;&gt;\&quot;&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;10.10.0.100:6443&lt;/span&gt;&lt;span class=&quot;se&quot;&gt;\&quot;&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;
etcd:
  external:
    endpoints:
      - https://127.0.0.1:2379
    caFile: /etc/kubernetes/pki/etcd/ca.crt
    certFile: /etc/kubernetes/pki/apiserver-etcd-client.crt
    keyFile: /etc/kubernetes/pki/apiserver-etcd-client.key
networking:
  serviceSubnet: &lt;/span&gt;&lt;span class=&quot;se&quot;&gt;\&quot;&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;10.96.0.0/16&lt;/span&gt;&lt;span class=&quot;se&quot;&gt;\&quot;&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;
  podSubnet: &lt;/span&gt;&lt;span class=&quot;se&quot;&gt;\&quot;&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;10.244.0.0/16&lt;/span&gt;&lt;span class=&quot;se&quot;&gt;\&quot;&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;
  dnsDomain: &lt;/span&gt;&lt;span class=&quot;se&quot;&gt;\&quot;&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;cluster.local&lt;/span&gt;&lt;span class=&quot;se&quot;&gt;\&quot;&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;
EOF&quot;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;
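&lt;p&gt;Because the file is written over SSH through a quoted heredoc, it is easy for a stray backslash to mangle it. A quick local sanity check (the file content is reproduced here so the snippet is self-contained; on the node, just run the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;grep&lt;/code&gt; loop against the real file):&lt;/p&gt;

```shell
# Recreate kubeadm-config.yaml locally (same content as written over SSH above).
cat > kubeadm-config.yaml <<'EOF'
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
kubernetesVersion: "v1.34.3"
controlPlaneEndpoint: "10.10.0.100:6443"
etcd:
  external:
    endpoints:
      - https://127.0.0.1:2379
    caFile: /etc/kubernetes/pki/etcd/ca.crt
    certFile: /etc/kubernetes/pki/apiserver-etcd-client.crt
    keyFile: /etc/kubernetes/pki/apiserver-etcd-client.key
networking:
  serviceSubnet: "10.96.0.0/16"
  podSubnet: "10.244.0.0/16"
  dnsDomain: "cluster.local"
EOF

# Plain-grep sanity check: every key field must have survived the shell quoting.
for key in kubernetesVersion controlPlaneEndpoint podSubnet serviceSubnet caFile; do
  grep -q "$key" kubeadm-config.yaml || { echo "missing: $key"; exit 1; }
done
echo "kubeadm-config.yaml looks complete"
```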

&lt;h3 id=&quot;starting-initialization-and-bypassing-externaletcdversion-verification&quot;&gt;Starting initialization and bypassing ExternalEtcdVersion verification&lt;/h3&gt;

&lt;p&gt;When &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;kubeadm init&lt;/code&gt; starts, it attempts to verify access to the external etcd cluster. However, we have “wrapped” access to the cluster in a local HAProxy instance that &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;kubelet&lt;/code&gt; runs as a static pod. At this point, &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;kube-apiserver&lt;/code&gt;, which will take over the management of &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;kubelet&lt;/code&gt; (and, through it, the pod with &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;haproxy&lt;/code&gt;), has not started yet, so the check cannot succeed. Therefore, we need to disable the etcd “preflight check” (&lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;--ignore-preflight-errors=ExternalEtcdVersion&lt;/code&gt;). To initialize the cluster on the control plane node, run the following command:&lt;/p&gt;

&lt;div class=&quot;language-bash highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;nb&quot;&gt;sudo &lt;/span&gt;kubeadm init &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
  &lt;span class=&quot;nt&quot;&gt;--config&lt;/span&gt; &lt;span class=&quot;nv&quot;&gt;$HOME&lt;/span&gt;/kubeadm-config.yaml &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
  &lt;span class=&quot;nt&quot;&gt;--upload-certs&lt;/span&gt; &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
  &lt;span class=&quot;nt&quot;&gt;--ignore-preflight-errors&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;ExternalEtcdVersion
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;What to look for during initialization:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;After running the command, watch for the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;[control-plane] Creating static Pod manifest for &quot;kube-apiserver&quot;&lt;/code&gt; step. If the API server starts successfully, it means that it was able to connect to &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;etcd&lt;/code&gt; through your local proxy.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;If the command stops again with an error&lt;/strong&gt;, check if there are any processes left in the system from previous attempts:&lt;/p&gt;

&lt;div class=&quot;language-bash highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;c&quot;&gt;# If you need to completely reset the status before a new attempt&lt;/span&gt;
&lt;span class=&quot;nb&quot;&gt;sudo &lt;/span&gt;kubeadm reset &lt;span class=&quot;nt&quot;&gt;-f&lt;/span&gt;
&lt;span class=&quot;c&quot;&gt;# After the reset, you will need to restart kubelet again to bring up the proxy.&lt;/span&gt;
&lt;span class=&quot;nb&quot;&gt;sudo &lt;/span&gt;systemctl restart kubelet
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Under normal circumstances, you will see the following &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;kubeadm init&lt;/code&gt; log:&lt;/p&gt;

&lt;details&gt;
  &lt;summary&gt;&lt;strong&gt;View log&lt;/strong&gt;&lt;/summary&gt;

  &lt;pre&gt;&lt;code class=&quot;language-log&quot;&gt;[init] Using Kubernetes version: v1.34.3
[preflight] Running pre-flight checks
	[WARNING ExternalEtcdVersion]: Get &quot;https://127.0.0.1:2379/version&quot;: dial tcp 127.0.0.1:2379: connect: connection refused
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using &apos;kubeadm config images pull&apos;
[certs] Using certificateDir folder &quot;/etc/kubernetes/pki&quot;
[certs] Generating &quot;ca&quot; certificate and key
[certs] Generating &quot;apiserver&quot; certificate and key
[certs] apiserver serving cert is signed for DNS names [cp-1 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.2.176 10.10.0.100]
[certs] Generating &quot;apiserver-kubelet-client&quot; certificate and key
[certs] Generating &quot;front-proxy-ca&quot; certificate and key
[certs] Generating &quot;front-proxy-client&quot; certificate and key
[certs] External etcd mode: Skipping etcd/ca certificate authority generation
[certs] External etcd mode: Skipping etcd/server certificate generation
[certs] External etcd mode: Skipping etcd/peer certificate generation
[certs] External etcd mode: Skipping etcd/healthcheck-client certificate generation
[certs] External etcd mode: Skipping apiserver-etcd-client certificate generation
[certs] Generating &quot;sa&quot; key and public key
[kubeconfig] Using kubeconfig folder &quot;/etc/kubernetes&quot;
[kubeconfig] Writing &quot;admin.conf&quot; kubeconfig file
[kubeconfig] Writing &quot;super-admin.conf&quot; kubeconfig file
[kubeconfig] Writing &quot;kubelet.conf&quot; kubeconfig file
[kubeconfig] Writing &quot;controller-manager.conf&quot; kubeconfig file
[kubeconfig] Writing &quot;scheduler.conf&quot; kubeconfig file
[control-plane] Using manifest folder &quot;/etc/kubernetes/manifests&quot;
[control-plane] Creating static Pod manifest for &quot;kube-apiserver&quot;
[control-plane] Creating static Pod manifest for &quot;kube-controller-manager&quot;
[control-plane] Creating static Pod manifest for &quot;kube-scheduler&quot;
[kubelet-start] Writing kubelet environment file with flags to file &quot;/var/lib/kubelet/kubeadm-flags.env&quot;
[kubelet-start] Writing kubelet configuration to file &quot;/var/lib/kubelet/instance-config.yaml&quot;
[patches] Applied patch of type &quot;application/strategic-merge-patch+json&quot; to target &quot;kubeletconfiguration&quot;
[kubelet-start] Writing kubelet configuration to file &quot;/var/lib/kubelet/config.yaml&quot;
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory &quot;/etc/kubernetes/manifests&quot;
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is healthy after 501.932756ms
[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
[control-plane-check] Checking kube-apiserver at https://192.168.2.176:6443/livez
[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
[control-plane-check] kube-controller-manager is healthy after 1.515332016s
[control-plane-check] kube-scheduler is healthy after 19.381430245s
[control-plane-check] kube-apiserver is healthy after 21.504025006s
[upload-config] Storing the configuration used in ConfigMap &quot;kubeadm-config&quot; in the &quot;kube-system&quot; Namespace
[kubelet] Creating a ConfigMap &quot;kubelet-config&quot; in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Storing the certificates in Secret &quot;kubeadm-certs&quot; in the &quot;kube-system&quot; Namespace
[upload-certs] Using certificate key:
7a088e936453ab3143f25cdb9827b8cac60888c75f91b9d6c2d08d23a32a2bc9
[mark-control-plane] Marking the node cp-1 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node cp-1 as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]
[bootstrap-token] Using token: z28v5d.4vm6rzekoibear23
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the &quot;cluster-info&quot; ConfigMap in the &quot;kube-public&quot; namespace
[kubelet-finalize] Updating &quot;/etc/kubernetes/kubelet.conf&quot; to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run &quot;kubectl apply -f [podnetwork].yaml&quot; with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of control-plane nodes running the following command on each as root:

  kubeadm join 10.10.0.100:6443 --token z28v5d.4vm6rzekoibear23 \
	--discovery-token-ca-cert-hash sha256:4c23033729b477d1fc30ae4b4041fe7dae70fa8defd5ecb57c571e969e00f8e0 \
	--control-plane --certificate-key 7a088e936453ab3143f25cdb9827b8cac60888c75f91b9d6c2d08d23a32a2bc9

Please note that the certificate-key gives access to cluster sensitive data, keep it secret!
As a safeguard, uploaded-certs will be deleted in two hours; If necessary, you can use
&quot;kubeadm init phase upload-certs --upload-certs&quot; to reload certs afterward.

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 10.10.0.100:6443 --token z28v5d.4vm6rzekoibear23 \
	--discovery-token-ca-cert-hash sha256:4c23033729b477d1fc30ae4b4041fe7dae70fa8defd5ecb57c571e969e00f8e0
&lt;/code&gt;&lt;/pre&gt;

&lt;/details&gt;
&lt;p&gt;&lt;br /&gt;&lt;/p&gt;
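&lt;p&gt;If the join command printout above gets lost, the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;--discovery-token-ca-cert-hash&lt;/code&gt; value can be recomputed from the cluster CA using the standard openssl pipeline from the kubeadm documentation. The sketch below demonstrates it on a throwaway CA so it can be run anywhere; on a real control plane node, point the pipeline at &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;/etc/kubernetes/pki/ca.crt&lt;/code&gt; instead:&lt;/p&gt;

```shell
# Generate a throwaway CA just for demonstration (on a real node, skip this
# step and use /etc/kubernetes/pki/ca.crt in the pipeline below).
openssl req -x509 -newkey rsa:2048 -nodes -keyout ca.key -out ca.crt \
  -subj "/CN=demo-kubernetes-ca" -days 1 2>/dev/null

# sha256 of the DER-encoded public key of the CA certificate -- the value
# kubeadm join expects after `--discovery-token-ca-cert-hash sha256:`.
HASH=$(openssl x509 -pubkey -noout -in ca.crt \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 \
  | awk '{print $NF}')
echo "sha256:${HASH}"
```

New bootstrap tokens, if the original has expired, can be issued with `kubeadm token create` on an existing control plane node.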

&lt;p&gt;After starting initialization with the preflight error ignored, it is important to ensure that the API server was actually able to connect to the database and is not simply “hanging” in a waiting state.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Main check: API server status&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;If &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;kubeadm init&lt;/code&gt; has passed the preflight stage, it will attempt to start &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;kube-apiserver&lt;/code&gt;. If the API server cannot communicate with etcd through our proxy, it will continuously restart.&lt;/p&gt;

&lt;p&gt;Let’s check the API server logs:&lt;/p&gt;

&lt;div class=&quot;language-bash highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;nb&quot;&gt;sudo tail&lt;/span&gt; &lt;span class=&quot;nt&quot;&gt;-f&lt;/span&gt; /var/log/pods/kube-system_kube-apiserver-&lt;span class=&quot;k&quot;&gt;*&lt;/span&gt;/kube-apiserver/&lt;span class=&quot;k&quot;&gt;*&lt;/span&gt;.log
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;details&gt;
  &lt;summary&gt;&lt;strong&gt;View API server log&lt;/strong&gt;&lt;/summary&gt;

  &lt;div class=&quot;language-bash highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;k8sadmin@cp-1:~&lt;span class=&quot;nv&quot;&gt;$ &lt;/span&gt;&lt;span class=&quot;nb&quot;&gt;sudo tail&lt;/span&gt; &lt;span class=&quot;nt&quot;&gt;-f&lt;/span&gt; /var/log/pods/kube-system_kube-apiserver-cp-1_70e58895431aff7a0cb441009519f1c6/kube-apiserver/0.log
2026-01-07T11:10:20.602046303+02:00 stderr F W0107 09:10:20.601902       1 logging.go:55] &lt;span class=&quot;o&quot;&gt;[&lt;/span&gt;core] &lt;span class=&quot;o&quot;&gt;[&lt;/span&gt;Channel &lt;span class=&quot;c&quot;&gt;#359 SubChannel #360]grpc: addrConn.createTransport failed to connect to {Addr: &quot;127.0.0.1:2379&quot;, ServerName: &quot;127.0.0.1:2379&quot;, BalancerAttributes: {&quot;&amp;lt;%!p(pickfirstleaf.managedByPickfirstKeyType={})&amp;gt;&quot;: &quot;&amp;lt;%!p(bool=true)&amp;gt;&quot; }}. Err: connection error: desc = &quot;transport: authentication handshake failed: context canceled&quot;&lt;/span&gt;
2026-01-07T11:10:20.617773217+02:00 stderr F W0107 09:10:20.617570       1 logging.go:55] &lt;span class=&quot;o&quot;&gt;[&lt;/span&gt;core] &lt;span class=&quot;o&quot;&gt;[&lt;/span&gt;Channel &lt;span class=&quot;c&quot;&gt;#363 SubChannel #364]grpc: addrConn.createTransport failed to connect to {Addr: &quot;127.0.0.1:2379&quot;, ServerName: &quot;127.0.0.1:2379&quot;, BalancerAttributes: {&quot;&amp;lt;%!p(pickfirstleaf.managedByPickfirstKeyType={})&amp;gt;&quot;: &quot;&amp;lt;%!p(bool=true)&amp;gt;&quot; }}. Err: connection error: desc = &quot;transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled&quot;&lt;/span&gt;
2026-01-07T11:10:20.634721419+02:00 stderr F W0107 09:10:20.634607       1 logging.go:55] &lt;span class=&quot;o&quot;&gt;[&lt;/span&gt;core] &lt;span class=&quot;o&quot;&gt;[&lt;/span&gt;Channel &lt;span class=&quot;c&quot;&gt;#367 SubChannel #368]grpc: addrConn.createTransport failed to connect to {Addr: &quot;127.0.0.1:2379&quot;, ServerName: &quot;127.0.0.1:2379&quot;, BalancerAttributes: {&quot;&amp;lt;%!p(pickfirstleaf.managedByPickfirstKeyType={})&amp;gt;&quot;: &quot;&amp;lt;%!p(bool=true)&amp;gt;&quot; }}. Err: connection error: desc = &quot;transport: authentication handshake failed: context canceled&quot;&lt;/span&gt;
2026-01-07T11:10:20.644114716+02:00 stderr F W0107 09:10:20.644002       1 logging.go:55] &lt;span class=&quot;o&quot;&gt;[&lt;/span&gt;core] &lt;span class=&quot;o&quot;&gt;[&lt;/span&gt;Channel &lt;span class=&quot;c&quot;&gt;#371 SubChannel #372]grpc: addrConn.createTransport failed to connect to {Addr: &quot;127.0.0.1:2379&quot;, ServerName: &quot;127.0.0.1:2379&quot;, BalancerAttributes: {&quot;&amp;lt;%!p(pickfirstleaf.managedByPickfirstKeyType={})&amp;gt;&quot;: &quot;&amp;lt;%!p(bool=true)&amp;gt;&quot; }}. Err: connection error: desc = &quot;transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled&quot;&lt;/span&gt;
2026-01-07T11:10:20.662781139+02:00 stderr F W0107 09:10:20.662426       1 logging.go:55] &lt;span class=&quot;o&quot;&gt;[&lt;/span&gt;core] &lt;span class=&quot;o&quot;&gt;[&lt;/span&gt;Channel &lt;span class=&quot;c&quot;&gt;#375 SubChannel #376]grpc: addrConn.createTransport failed to connect to {Addr: &quot;127.0.0.1:2379&quot;, ServerName: &quot;127.0.0.1:2379&quot;, BalancerAttributes: {&quot;&amp;lt;%!p(pickfirstleaf.managedByPickfirstKeyType={})&amp;gt;&quot;: &quot;&amp;lt;%!p(bool=true)&amp;gt;&quot; }}. Err: connection error: desc = &quot;transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled&quot;&lt;/span&gt;
2026-01-07T11:10:20.675026234+02:00 stderr F W0107 09:10:20.674872       1 logging.go:55] &lt;span class=&quot;o&quot;&gt;[&lt;/span&gt;core] &lt;span class=&quot;o&quot;&gt;[&lt;/span&gt;Channel &lt;span class=&quot;c&quot;&gt;#379 SubChannel #380]grpc: addrConn.createTransport failed to connect to {Addr: &quot;127.0.0.1:2379&quot;, ServerName: &quot;127.0.0.1:2379&quot;, BalancerAttributes: {&quot;&amp;lt;%!p(pickfirstleaf.managedByPickfirstKeyType={})&amp;gt;&quot;: &quot;&amp;lt;%!p(bool=true)&amp;gt;&quot; }}. Err: connection error: desc = &quot;transport: authentication handshake failed: context canceled&quot;&lt;/span&gt;
2026-01-07T11:10:20.860920026+02:00 stderr F W0107 09:10:20.860664       1 logging.go:55] &lt;span class=&quot;o&quot;&gt;[&lt;/span&gt;core] &lt;span class=&quot;o&quot;&gt;[&lt;/span&gt;Channel &lt;span class=&quot;c&quot;&gt;#383 SubChannel #384]grpc: addrConn.createTransport failed to connect to {Addr: &quot;127.0.0.1:2379&quot;, ServerName: &quot;127.0.0.1:2379&quot;, BalancerAttributes: {&quot;&amp;lt;%!p(pickfirstleaf.managedByPickfirstKeyType={})&amp;gt;&quot;: &quot;&amp;lt;%!p(bool=true)&amp;gt;&quot; }}. Err: connection error: desc = &quot;transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled&quot;&lt;/span&gt;
2026-01-07T11:11:10.4594371+02:00 stderr F I0107 09:11:10.459184       1 controller.go:667] quota admission added evaluator &lt;span class=&quot;k&quot;&gt;for&lt;/span&gt;: replicasets.apps
2026-01-07T11:19:59.613078541+02:00 stderr F I0107 09:19:59.612638       1 cidrallocator.go:277] updated ClusterIP allocator &lt;span class=&quot;k&quot;&gt;for &lt;/span&gt;Service CIDR 10.96.0.0/16
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;  &lt;/div&gt;

&lt;/details&gt;
&lt;p&gt;&lt;br /&gt;&lt;/p&gt;

&lt;p&gt;This log indicates a very important stage: our &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;kube-apiserver&lt;/code&gt; &lt;strong&gt;has successfully started&lt;/strong&gt;, but the initialization process went through a “struggle” to connect to etcd.&lt;/p&gt;

&lt;p&gt;Here’s what happened:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;
    &lt;p&gt;&lt;strong&gt;Error stage (Handshake failed)&lt;/strong&gt;&lt;/p&gt;

    &lt;p&gt;The first lines of the log show errors: &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;transport: authentication handshake failed: context canceled&lt;/code&gt; and &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;dial tcp 127.0.0.1:2379: operation was canceled&lt;/code&gt;.&lt;/p&gt;

    &lt;p&gt;This means that:&lt;/p&gt;

    &lt;ul&gt;
      &lt;li&gt;The API server tried to connect to your HAProxy (&lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;127.0.0.1:2379&lt;/code&gt;).&lt;/li&gt;
      &lt;li&gt;The connection was established, but the TLS handshake was interrupted.&lt;/li&gt;
    &lt;/ul&gt;

    &lt;p&gt;&lt;strong&gt;Reason&lt;/strong&gt;: This happened at the very moment when &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;kubeadm&lt;/code&gt; was still generating or substituting certificates, or when HAProxy had not yet established a stable session with the etcd backend nodes. This is normal behavior during a “cold” start of the control plane.&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;&lt;strong&gt;Success phase (Stabilization)&lt;/strong&gt;&lt;/p&gt;

    &lt;p&gt;Note the last lines: &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;I0107 09:11:10.459184 ... quota admission added evaluator for: replicasets.apps&lt;/code&gt;, &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;I0107 09:19:59.612638 ... updated ClusterIP allocator for Service CIDR 10.96.0.0/16&lt;/code&gt;&lt;/p&gt;

    &lt;p&gt;This is a win:&lt;/p&gt;

    &lt;ul&gt;
      &lt;li&gt;&lt;strong&gt;The API server is alive&lt;/strong&gt;. If it couldn’t connect to etcd, it would just crash (CrashLoopBackOff) and you wouldn’t see any logs about &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;cidrallocator&lt;/code&gt;.&lt;/li&gt;
    &lt;/ul&gt;

    &lt;p&gt;The &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;ClusterIP allocator&lt;/code&gt; message means that the API server has already started writing data to etcd and managing cluster resources.&lt;/p&gt;

    &lt;p&gt;The long interval between the last entries (almost nine minutes) shows stable background operation of the controllers.&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;&lt;strong&gt;Component status&lt;/strong&gt;&lt;/p&gt;

    &lt;p&gt;The fact that you see folders for &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;kube-controller-manager&lt;/code&gt; and &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;kube-scheduler&lt;/code&gt; in &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;/var/log/pods/&lt;/code&gt; confirms that &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;kubeadm&lt;/code&gt; has successfully completed the manifest creation phase and all three core Control Plane components are running.&lt;/p&gt;

    &lt;div class=&quot;language-bash highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;k8sadmin@cp-1:~&lt;span class=&quot;nv&quot;&gt;$ &lt;/span&gt;&lt;span class=&quot;nb&quot;&gt;sudo ls&lt;/span&gt; &lt;span class=&quot;nt&quot;&gt;-la&lt;/span&gt; /var/log/pods/
total 28
drwxr-x---  7 root root   4096 Jan  8 22:23 &lt;span class=&quot;nb&quot;&gt;.&lt;/span&gt;
drwxr-xr-x 11 root syslog 4096 Jan  8 22:23 ..
drwxr-xr-x  3 root root   4096 Jan  8 22:23 kube-system_etcd-haproxy-cp-1_e2b3a81fe56706e845a17ba096c5dfad
drwxr-xr-x  3 root root   4096 Jan  8 22:23 kube-system_kube-apiserver-cp-1_e74afa6943effdf6bbdcfc384bd87bb6
drwxr-xr-x  3 root root   4096 Jan  8 22:23 kube-system_kube-controller-manager-cp-1_b635ba5e5439cc2c581bf61ca1e6fb9e
drwxr-xr-x  3 root root   4096 Jan  8 22:23 kube-system_kube-proxy-9qh54_9e21026d-0d6e-4f8c-a071-842149ffd24e
drwxr-xr-x  3 root root   4096 Jan  8 22:23 kube-system_kube-scheduler-cp-1_0cf013b3f4c49c84241ee3a56735a15d
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;    &lt;/div&gt;
  &lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;What to check now?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Since the API server is responding, run the following commands to finally verify that the first node is working:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;
    &lt;p&gt;&lt;strong&gt;Checking nodes&lt;/strong&gt;: &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;kubectl get nodes&lt;/code&gt; (&lt;em&gt;You should see cp-1 in NotReady status — this is normal because we haven’t installed Calico yet&lt;/em&gt;).&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;&lt;strong&gt;Proxy health check (via HAProxy)&lt;/strong&gt;: &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;kubectl get --raw /healthz/etcd&lt;/code&gt; (&lt;em&gt;Should return &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;ok&lt;/code&gt;&lt;/em&gt;).&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;&lt;strong&gt;Check etcd access points&lt;/strong&gt;: &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;kubectl describe pod kube-apiserver-cp-1 -n kube-system | grep etcd&lt;/code&gt; (&lt;strong&gt;Make sure that only &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;https://127.0.0.1:2379&lt;/code&gt; appears there&lt;/strong&gt;).&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;&lt;strong&gt;Check HAProxy&lt;/strong&gt;: open the load balancer dashboard at &lt;a href=&quot;http://10.10.0.100:8404/stats&quot;&gt;http://10.10.0.100:8404/stats&lt;/a&gt; (&lt;em&gt;In the k8s-api-backend section, the status of the first control plane node should be UP&lt;/em&gt;)&lt;/p&gt;
  &lt;/li&gt;
&lt;/ol&gt;
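&lt;p&gt;The four checks above can be bundled into one small script. A sketch (the function name is ours; it assumes &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;kubectl&lt;/code&gt; is configured against the new cluster, and uses the node name &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;cp-1&lt;/code&gt; from this guide):&lt;/p&gt;

```shell
# Sketch: bundle the post-init checks into one function.
# Assumes kubectl is installed and ~/.kube/config points at the new cluster.
verify_first_node() {
  kubectl get nodes || return 1               # cp-1 may be NotReady before CNI
  [ "$(kubectl get --raw /healthz/etcd)" = "ok" ] || return 1   # etcd via HAProxy
  kubectl -n kube-system describe pod kube-apiserver-cp-1 \
    | grep etcd || return 1                   # should show only https://127.0.0.1:2379
}

if command -v kubectl >/dev/null 2>&1; then
  verify_first_node && echo "first node OK" || echo "checks failed (is the cluster up?)"
else
  echo "kubectl not available here; run this on cp-1"
fi
```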

&lt;p&gt;Next, it is recommended to deploy a CNI plugin for the pod network.&lt;/p&gt;

&lt;h2 id=&quot;installing-calico&quot;&gt;Installing Calico&lt;/h2&gt;

&lt;p&gt;We will use Calico to create the pod network. The article “&lt;a href=&quot;https://docs.tigera.io/calico/latest/getting-started/kubernetes/self-managed-onprem/onpremises&quot;&gt;Install Calico networking and network policy for on-premises deployments&lt;/a&gt;” describes the process of installing Calico on your own hardware in detail.&lt;/p&gt;

&lt;p&gt;We will use Tigera Operator and custom resource definitions (CRDs). We will apply the following two manifests:&lt;/p&gt;

&lt;div class=&quot;language-bash highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;kubectl create &lt;span class=&quot;nt&quot;&gt;-f&lt;/span&gt; https://raw.githubusercontent.com/projectcalico/calico/v3.31.3/manifests/operator-crds.yaml
kubectl create &lt;span class=&quot;nt&quot;&gt;-f&lt;/span&gt; https://raw.githubusercontent.com/projectcalico/calico/v3.31.3/manifests/tigera-operator.yaml
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Then we will download the file with the custom resources needed to configure Calico.&lt;/p&gt;

&lt;div class=&quot;language-bash highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;curl &lt;span class=&quot;nt&quot;&gt;-O&lt;/span&gt; https://raw.githubusercontent.com/projectcalico/calico/v3.31.3/manifests/custom-resources-bpf.yaml
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;and specify the CIDR of the pod network as we set it when initializing the first control plane node — &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;10.244.0.0/16&lt;/code&gt; (the default value in Calico is &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;cidr: 192.168.0.0/16&lt;/code&gt;).&lt;/p&gt;
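&lt;p&gt;The edit can be made in any editor or scripted. A sketch using &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;sed&lt;/code&gt;, demonstrated on a minimal excerpt of the manifest so it can be run anywhere (the structure is abbreviated for the demo; the real downloaded file contains the same &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;cidr:&lt;/code&gt; line):&lt;/p&gt;

```shell
# Minimal excerpt with the same default IPPool CIDR as the downloaded
# custom-resources-bpf.yaml (structure abbreviated for this demo).
cat > custom-resources-bpf.yaml <<'EOF'
    ipPools:
      - name: default-ipv4-ippool
        cidr: 192.168.0.0/16
EOF

# Replace Calico's default pool CIDR with the podSubnet we gave kubeadm.
# `-i.bak` keeps a backup and works with both GNU and BSD sed.
sed -i.bak 's#cidr: 192\.168\.0\.0/16#cidr: 10.244.0.0/16#' custom-resources-bpf.yaml

grep 'cidr:' custom-resources-bpf.yaml   # confirm the change took effect
```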

&lt;p&gt;After making the changes, apply the manifest to install Calico:&lt;/p&gt;

&lt;div class=&quot;language-bash highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;kubectl create &lt;span class=&quot;nt&quot;&gt;-f&lt;/span&gt; custom-resources-bpf.yaml
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Track the installation using the command &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;watch kubectl get tigerastatus&lt;/code&gt;. After a few minutes (6–7 in my run), all Calico components will show &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;True&lt;/code&gt; in the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;AVAILABLE&lt;/code&gt; column.&lt;/p&gt;

&lt;div class=&quot;language-console highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;go&quot;&gt;NAME                            AVAILABLE   PROGRESSING   DEGRADED   SINCE
apiserver                       True        False         False      4m9s
calico                          True        False         False      3m29s
goldmane                        True        False         False      3m39s
ippools                         True        False         False      6m4s
kubeproxy-monitor               True        False         False      6m15s
whisker                         True        False         False      3m19s
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Once the Calico components are available, the control plane node will transition to the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;READY&lt;/code&gt; state.&lt;/p&gt;

&lt;h3 id=&quot;what-to-do-first-deploy-calico-or-run-kubeadm-join&quot;&gt;What to do first: deploy Calico or run kubeadm join?&lt;/h3&gt;

&lt;p&gt;Technically, you can do either, but option #1 (CNI before Join) is best practice.&lt;/p&gt;

&lt;h4 id=&quot;option-1-cni-first-then-join-recommended&quot;&gt;Option 1: CNI first, then Join (Recommended)&lt;/h4&gt;

&lt;p&gt;When you install CNI immediately after initializing the first node (&lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;cp-1&lt;/code&gt;), the cluster network becomes operational immediately.&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;strong&gt;Node Status&lt;/strong&gt;: The first node quickly transitions to &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;Ready&lt;/code&gt; status.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;CoreDNS&lt;/strong&gt;: CoreDNS pods, which typically hang in &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;Pending&lt;/code&gt; status without a network, start up.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Joining new nodes&lt;/strong&gt;: When you join &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;cp-2&lt;/code&gt; and &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;cp-3&lt;/code&gt;, they immediately receive network settings. System pods on new nodes will be able to start communicating with each other faster.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Convenience&lt;/strong&gt;: You can see the actual health status of each new node immediately after it joins.&lt;/li&gt;
&lt;/ul&gt;

&lt;h4 id=&quot;option-2-join-first-then-cni&quot;&gt;Option 2: Join first, then CNI&lt;/h4&gt;

&lt;p&gt;This is also a working scenario, but it looks more “alarming” during the process.&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;strong&gt;Node status&lt;/strong&gt;: All nodes (&lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;cp-1&lt;/code&gt;, &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;cp-2&lt;/code&gt;, &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;cp-3&lt;/code&gt;) will be in &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;NotReady&lt;/code&gt; state.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;CoreDNS&lt;/strong&gt;: All system network components will be waiting.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Joining&lt;/strong&gt;: &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;kubeadm join&lt;/code&gt; will be successful because CNI is not required for the joining process itself (TLS Bootstrap and config copying) — the physical network between nodes is used here.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Risks&lt;/strong&gt;: If a problem arises with the network communication of the pods themselves during the join (for example, health checks of system components), it will be more difficult for you to understand whether this is a &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;join&lt;/code&gt; problem or simply the absence of CNI.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;We recommend following this order:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;&lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;kubeadm init&lt;/code&gt; on &lt;strong&gt;cp-1&lt;/strong&gt;.&lt;/li&gt;
  &lt;li&gt;Required &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;kubeconfig&lt;/code&gt; settings.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;CNI installation&lt;/strong&gt; (e.g., Cilium, Calico, or Flannel).&lt;/li&gt;
  &lt;li&gt;Check &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;kubectl get nodes&lt;/code&gt; (should be &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;Ready&lt;/code&gt;).&lt;/li&gt;
  &lt;li&gt;Transfer certificates to new nodes (if necessary).&lt;/li&gt;
  &lt;li&gt;&lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;kubeadm join&lt;/code&gt; for &lt;strong&gt;cp-2&lt;/strong&gt; and &lt;strong&gt;cp-3&lt;/strong&gt;.&lt;/li&gt;
&lt;/ol&gt;

&lt;h2 id=&quot;deploying-and-joining-the-subsequent-control-panel-nodes&quot;&gt;Deploying and joining the subsequent control plane nodes&lt;/h2&gt;

&lt;p&gt;Let’s reuse the launch command from the first control plane node, replacing the IP address with the appropriate value for each node:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;strong&gt;cp-2.yaml&lt;/strong&gt; — &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;10.10.0.22&lt;/code&gt;&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;cp-3.yaml&lt;/strong&gt; — &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;10.10.0.23&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Let’s create virtual machines:&lt;/p&gt;

&lt;div class=&quot;language-bash highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;nb&quot;&gt;export &lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;VM_IP&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;10.10.0.22/24&quot;&lt;/span&gt;

multipass launch &lt;span class=&quot;nt&quot;&gt;--name&lt;/span&gt; cp-2 &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
  &lt;span class=&quot;nt&quot;&gt;--cpus&lt;/span&gt; 2 &lt;span class=&quot;nt&quot;&gt;--memory&lt;/span&gt; 2.5G &lt;span class=&quot;nt&quot;&gt;--disk&lt;/span&gt; 8G &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
  &lt;span class=&quot;nt&quot;&gt;--network&lt;/span&gt; &lt;span class=&quot;nv&quot;&gt;name&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;en0,mode&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;manual &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
  &lt;span class=&quot;nt&quot;&gt;--cloud-init&lt;/span&gt; &amp;lt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt; yq eval-all &lt;span class=&quot;s1&quot;&gt;&apos;
      # Merge all files into a single object
      . as $item ireduce ({}; . *+ $item) |

      # Update network configuration
      with(.write_files[] | select(.path == &quot;/etc/netplan/60-static-ip.yaml&quot;);
        .content |= (
          from_yaml |
          .network.ethernets.enp0s2.addresses += [strenv(VM_IP)] |
          to_yaml
        )
      ) &apos;&lt;/span&gt; &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
      snipets/cloud-init-config.yaml &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
      snipets/cloud-init-user.yaml &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
      snipets/cloud-init-base.yaml &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
      snipets/cloud-init-cp-haproxy.yaml &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
      snipets/cloud-init-cp-haproxy-manifest.yaml&lt;span class=&quot;o&quot;&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;
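The two launches differ only in the VM name and address, so the calls can be driven by a small loop. This is a sketch that only echoes the commands for review — drop the leading `echo` and re-attach the `--cloud-init` yq merge shown above to actually launch:

```shell
#!/bin/bash
# Launch cp-2 and cp-3 with their static addresses.
# A sketch: commands are echoed for review; remove the leading "echo"
# (and re-add the yq --cloud-init merge from above) to actually launch.
for pair in cp-2:10.10.0.22 cp-3:10.10.0.23; do
  name=${pair%%:*}               # VM name, e.g. cp-2
  export VM_IP="${pair##*:}/24"  # address consumed by the yq merge
  echo multipass launch --name "$name" \
    --cpus 2 --memory 2.5G --disk 8G \
    --network name=en0,mode=manual
done
```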

&lt;p&gt;Once the node is ready, we will join it as another control plane node. Take the join command printed during the initialization of the first control plane node and, just as before, add the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;--ignore-preflight-errors=ExternalEtcdVersion&lt;/code&gt; parameter to it:&lt;/p&gt;

&lt;div class=&quot;language-bash highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;nb&quot;&gt;sudo &lt;/span&gt;kubeadm &lt;span class=&quot;nb&quot;&gt;join &lt;/span&gt;10.10.0.100:6443 &lt;span class=&quot;nt&quot;&gt;--token&lt;/span&gt; z28v5d.4vm6rzekoibear23 &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
        &lt;span class=&quot;nt&quot;&gt;--discovery-token-ca-cert-hash&lt;/span&gt; sha256:4c23033729b477d1fc30ae4b4041fe7dae70fa8defd5ecb57c571e969e00f8e0 &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
        &lt;span class=&quot;nt&quot;&gt;--control-plane&lt;/span&gt; &lt;span class=&quot;nt&quot;&gt;--certificate-key&lt;/span&gt; 7a088e936453ab3143f25cdb9827b8cac60888c75f91b9d6c2d08d23a32a2bc9 &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
        &lt;span class=&quot;nt&quot;&gt;--ignore-preflight-errors&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;ExternalEtcdVersion
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;The log of joining the second control plane node will look like this:&lt;/p&gt;

&lt;details&gt;
  &lt;summary&gt;&lt;strong&gt;View the log of joining the second node&lt;/strong&gt;&lt;/summary&gt;

  &lt;pre&gt;&lt;code class=&quot;language-log&quot;&gt;k8sadmin@cp-2:~$ sudo kubeadm join 10.10.0.100:6443 --token z28v5d.4vm6rzekoibear23 \
        --discovery-token-ca-cert-hash sha256:4c23033729b477d1fc30ae4b4041fe7dae70fa8defd5ecb57c571e969e00f8e0 \
        --control-plane --certificate-key 7a088e936453ab3143f25cdb9827b8cac60888c75f91b9d6c2d08d23a32a2bc9 \
        --ignore-preflight-errors=ExternalEtcdVersion
[preflight] Running pre-flight checks
[preflight] Reading configuration from the &quot;kubeadm-config&quot; ConfigMap in namespace &quot;kube-system&quot;...
[preflight] Use &apos;kubeadm init phase upload-config kubeadm --config your-config-file&apos; to re-upload it.
[preflight] Running pre-flight checks before initializing the new control plane instance
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using &apos;kubeadm config images pull&apos;
[download-certs] Downloading the certificates in Secret &quot;kubeadm-certs&quot; in the &quot;kube-system&quot; Namespace
[download-certs] Saving the certificates to the folder: &quot;/etc/kubernetes/pki&quot;
[certs] Using certificateDir folder &quot;/etc/kubernetes/pki&quot;
[certs] Generating &quot;apiserver&quot; certificate and key
[certs] apiserver serving cert is signed for DNS names [cp-2 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.2.177 10.10.0.100]
[certs] Generating &quot;apiserver-kubelet-client&quot; certificate and key
[certs] Generating &quot;front-proxy-client&quot; certificate and key
[certs] Valid certificates and keys now exist in &quot;/etc/kubernetes/pki&quot;
[certs] Using the existing &quot;sa&quot; key
[kubeconfig] Generating kubeconfig files
[kubeconfig] Using kubeconfig folder &quot;/etc/kubernetes&quot;
[kubeconfig] Writing &quot;admin.conf&quot; kubeconfig file
[kubeconfig] Writing &quot;controller-manager.conf&quot; kubeconfig file
[kubeconfig] Writing &quot;scheduler.conf&quot; kubeconfig file
[control-plane] Using manifest folder &quot;/etc/kubernetes/manifests&quot;
[control-plane] Creating static Pod manifest for &quot;kube-apiserver&quot;
[control-plane] Creating static Pod manifest for &quot;kube-controller-manager&quot;
[control-plane] Creating static Pod manifest for &quot;kube-scheduler&quot;
[check-etcd] Skipping etcd check in external mode
[kubelet-start] Writing kubelet configuration to file &quot;/var/lib/kubelet/instance-config.yaml&quot;
[patches] Applied patch of type &quot;application/strategic-merge-patch+json&quot; to target &quot;kubeletconfiguration&quot;
[kubelet-start] Writing kubelet configuration to file &quot;/var/lib/kubelet/config.yaml&quot;
[kubelet-start] Writing kubelet environment file with flags to file &quot;/var/lib/kubelet/kubeadm-flags.env&quot;
[kubelet-start] Starting the kubelet
[control-plane-join] Using external etcd - no local stacked instance added
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is healthy after 1.005951104s
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap
[mark-control-plane] Marking the node cp-2 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node cp-2 as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]
[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
[control-plane-check] Checking kube-apiserver at https://192.168.2.177:6443/livez
[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
[control-plane-check] kube-controller-manager is healthy after 2.950097ms
[control-plane-check] kube-scheduler is healthy after 6.396945ms
[control-plane-check] kube-apiserver is healthy after 501.413278ms

This node has joined the cluster and a new control plane instance was created:

* Certificate signing request was sent to apiserver and approval was received.
* The Kubelet was informed of the new secure connection details.
* Control plane label and taint were applied to the new node.
* The Kubernetes control plane instances scaled up.


To start administering your cluster from this node, you need to run the following as a regular user:

	mkdir -p $HOME/.kube
	sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	sudo chown $(id -u):$(id -g) $HOME/.kube/config

Run &apos;kubectl get nodes&apos; to see this node join the cluster.
&lt;/code&gt;&lt;/pre&gt;

&lt;/details&gt;
&lt;p&gt;&lt;br /&gt;&lt;/p&gt;

&lt;p&gt;If a long time has passed between creating the first control plane node and joining subsequent nodes, you may encounter this error:&lt;/p&gt;

&lt;pre&gt;&lt;code class=&quot;language-log&quot;&gt;[preflight] You can also perform this action beforehand using &apos;kubeadm config images pull&apos;
[download-certs] Downloading the certificates in Secret &quot;kubeadm-certs&quot; in the &quot;kube-system&quot; Namespace
error: error execution phase control-plane-prepare/download-certs: error downloading certs: error downloading the secret: Secret &quot;kubeadm-certs&quot; was not found in the &quot;kube-system&quot; Namespace. This Secret might have expired. Please, run `kubeadm init phase upload-certs --upload-certs` on a control plane to generate a new one
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Since we use &lt;strong&gt;External Etcd&lt;/strong&gt;, the standard &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;upload-certs&lt;/code&gt; procedure does not work as most guides describe it: for security reasons, kubeadm does not store the keys to an external etcd cluster in its secrets.&lt;/p&gt;

&lt;p&gt;If we need to add another node or restore an existing one, a few additional steps are required.&lt;/p&gt;

&lt;p&gt;This method is used when automatic key loading (&lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;--certificate-key&lt;/code&gt;) is not possible or the secret with the keys has expired.&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;
    &lt;p&gt;&lt;strong&gt;Preparing the file system on the new node&lt;/strong&gt;&lt;/p&gt;

    &lt;p&gt;On the new node (for example, &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;cp-4&lt;/code&gt;), create the necessary folders for certificates and manifests.&lt;/p&gt;

    &lt;div class=&quot;language-bash highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;c&quot;&gt;# On the new node&lt;/span&gt;
&lt;span class=&quot;nb&quot;&gt;sudo mkdir&lt;/span&gt; &lt;span class=&quot;nt&quot;&gt;-p&lt;/span&gt; /etc/kubernetes/pki/etcd
&lt;span class=&quot;nb&quot;&gt;sudo mkdir&lt;/span&gt; &lt;span class=&quot;nt&quot;&gt;-p&lt;/span&gt; /etc/kubernetes/manifests
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;    &lt;/div&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;&lt;strong&gt;Manual transfer of certificates&lt;/strong&gt;&lt;/p&gt;

    &lt;p&gt;We need to copy &lt;strong&gt;9 files&lt;/strong&gt; from any working control plane node (for example, from &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;cp-1&lt;/code&gt;) to the new node. The paths must match exactly.&lt;/p&gt;

    &lt;table&gt;
      &lt;thead&gt;
        &lt;tr&gt;
          &lt;th&gt;File on source (cp-1)&lt;/th&gt;
          &lt;th&gt;Where to place on new (cp-4)&lt;/th&gt;
          &lt;th&gt;Description&lt;/th&gt;
        &lt;/tr&gt;
      &lt;/thead&gt;
      &lt;tbody&gt;
        &lt;tr&gt;
          &lt;td&gt;&lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;/etc/kubernetes/pki/ca.crt&lt;/code&gt;&lt;/td&gt;
          &lt;td&gt;&lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;/etc/kubernetes/pki/ca.crt&lt;/code&gt;&lt;/td&gt;
          &lt;td&gt;Cluster CA&lt;/td&gt;
        &lt;/tr&gt;
        &lt;tr&gt;
          &lt;td&gt;&lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;/etc/kubernetes/pki/ca.key&lt;/code&gt;&lt;/td&gt;
          &lt;td&gt;&lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;/etc/kubernetes/pki/ca.key&lt;/code&gt;&lt;/td&gt;
          &lt;td&gt;CA key&lt;/td&gt;
        &lt;/tr&gt;
        &lt;tr&gt;
          &lt;td&gt;&lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;/etc/kubernetes/pki/sa.pub&lt;/code&gt;&lt;/td&gt;
          &lt;td&gt;&lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;/etc/kubernetes/pki/sa.pub&lt;/code&gt;&lt;/td&gt;
          &lt;td&gt;ServiceAccount key (public)&lt;/td&gt;
        &lt;/tr&gt;
        &lt;tr&gt;
          &lt;td&gt;&lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;/etc/kubernetes/pki/sa.key&lt;/code&gt;&lt;/td&gt;
          &lt;td&gt;&lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;/etc/kubernetes/pki/sa.key&lt;/code&gt;&lt;/td&gt;
          &lt;td&gt;ServiceAccount key (private)&lt;/td&gt;
        &lt;/tr&gt;
        &lt;tr&gt;
          &lt;td&gt;&lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;/etc/kubernetes/pki/front-proxy-ca.crt&lt;/code&gt;&lt;/td&gt;
          &lt;td&gt;&lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;/etc/kubernetes/pki/front-proxy-ca.crt&lt;/code&gt;&lt;/td&gt;
          &lt;td&gt;CA for API aggregation&lt;/td&gt;
        &lt;/tr&gt;
        &lt;tr&gt;
          &lt;td&gt;&lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;/etc/kubernetes/pki/front-proxy-ca.key&lt;/code&gt;&lt;/td&gt;
          &lt;td&gt;&lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;/etc/kubernetes/pki/front-proxy-ca.key&lt;/code&gt;&lt;/td&gt;
          &lt;td&gt;Front Proxy Key&lt;/td&gt;
        &lt;/tr&gt;
        &lt;tr&gt;
          &lt;td&gt;&lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;/etc/kubernetes/pki/apiserver-etcd-client.crt&lt;/code&gt;&lt;/td&gt;
          &lt;td&gt;&lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;/etc/kubernetes/pki/apiserver-etcd-client.crt&lt;/code&gt;&lt;/td&gt;
          &lt;td&gt;Client certificate for etcd&lt;/td&gt;
        &lt;/tr&gt;
        &lt;tr&gt;
          &lt;td&gt;&lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;/etc/kubernetes/pki/apiserver-etcd-client.key&lt;/code&gt;&lt;/td&gt;
          &lt;td&gt;&lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;/etc/kubernetes/pki/apiserver-etcd-client.key&lt;/code&gt;&lt;/td&gt;
          &lt;td&gt;Client key for etcd&lt;/td&gt;
        &lt;/tr&gt;
        &lt;tr&gt;
          &lt;td&gt;&lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;/etc/kubernetes/pki/etcd/ca.crt&lt;/code&gt;&lt;/td&gt;
          &lt;td&gt;&lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;/etc/kubernetes/pki/etcd/ca.crt&lt;/code&gt;&lt;/td&gt;
          &lt;td&gt;CA for etcd itself&lt;/td&gt;
        &lt;/tr&gt;
      &lt;/tbody&gt;
    &lt;/table&gt;

    &lt;p&gt;&lt;strong&gt;Important&lt;/strong&gt;: After copying, set the correct permissions on the key files (.key), otherwise kubeadm may complain about insecure permissions:&lt;/p&gt;

    &lt;div class=&quot;language-bash highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;nb&quot;&gt;sudo chmod &lt;/span&gt;600 &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
  /etc/kubernetes/pki/apiserver-etcd-client.key &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
  /etc/kubernetes/pki/ca.key &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
  /etc/kubernetes/pki/front-proxy-ca.key  &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
  /etc/kubernetes/pki/sa.key
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;    &lt;/div&gt;
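A quick sanity check after setting the modes — this is a sketch: the default path is kubeadm's standard PKI directory, and it uses the GNU `stat` syntax available on Ubuntu:

```shell
#!/bin/bash
# Verify that every private key under the PKI directory is mode 600.
# A sketch: the default path is kubeadm's; GNU stat syntax (Ubuntu).
check_key_modes() {
  local dir=${1:-/etc/kubernetes/pki} key mode rc=0
  for key in "$dir"/*.key; do
    [ -e "$key" ] || continue          # no keys in this directory
    mode=$(stat -c '%a' "$key")
    if [ "$mode" != "600" ]; then
      echo "wrong mode $mode on $key" >&2
      rc=1
    fi
  done
  return $rc
}

# Usage on the node: check_key_modes && echo "key permissions OK"
```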

    &lt;p&gt;&lt;strong&gt;Note&lt;/strong&gt;: If you do not want to copy all 9 key and certificate files manually, you can copy only &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;/etc/kubernetes/pki/etcd/ca.crt&lt;/code&gt;, &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;/etc/kubernetes/pki/apiserver-etcd-client.key&lt;/code&gt;, and &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;/etc/kubernetes/pki/apiserver-etcd-client.crt&lt;/code&gt; to the target machine. First run the join command with the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;--certificate-key&lt;/code&gt; parameter (as with Stacked Etcd), which copies most of the necessary files managed by the apiserver; then manually copy the three etcd files and run the join command again without &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;--certificate-key&lt;/code&gt;.&lt;/p&gt;

    &lt;p&gt;To speed up the copying process, you can use the following script:&lt;/p&gt;

    &lt;div class=&quot;language-bash highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;nb&quot;&gt;cat&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;&amp;lt;&amp;lt;&lt;/span&gt; &lt;span class=&quot;sh&quot;&gt;&apos;&lt;/span&gt;&lt;span class=&quot;no&quot;&gt;EOF&lt;/span&gt;&lt;span class=&quot;sh&quot;&gt;&apos; &amp;gt; k8s-transit.sh
#!/bin/bash

# How to use
#  Usage: &lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;$0&lt;/span&gt;&lt;span class=&quot;sh&quot;&gt; &amp;lt;SRC_IP&amp;gt; &amp;lt;DST_IP&amp;gt; &amp;lt;USER&amp;gt; [SSH_KEY_PATH]
#  Example: k8s-transit.sh 10.10.0.21 10.10.0.178 k8sadmin ~/.ssh/id_rsa

# --- DEFAULT VALUES ---
DEFAULT_SRC=&quot;10.10.0.21&quot;
DEFAULT_DST=&quot;10.10.0.178&quot;
DEFAULT_USR=&quot;k8sadmin&quot;
DEFAULT_KEY=&quot;~/.ssh/k8s_cluster_key&quot;
# ---------------------------

SRC=&lt;/span&gt;&lt;span class=&quot;k&quot;&gt;${&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;1&lt;/span&gt;&lt;span class=&quot;k&quot;&gt;:-&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;$DEFAULT_SRC&lt;/span&gt;&lt;span class=&quot;k&quot;&gt;}&lt;/span&gt;&lt;span class=&quot;sh&quot;&gt;
DST=&lt;/span&gt;&lt;span class=&quot;k&quot;&gt;${&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;2&lt;/span&gt;&lt;span class=&quot;k&quot;&gt;:-&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;$DEFAULT_DST&lt;/span&gt;&lt;span class=&quot;k&quot;&gt;}&lt;/span&gt;&lt;span class=&quot;sh&quot;&gt;
USR=&lt;/span&gt;&lt;span class=&quot;k&quot;&gt;${&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;3&lt;/span&gt;&lt;span class=&quot;k&quot;&gt;:-&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;$DEFAULT_USR&lt;/span&gt;&lt;span class=&quot;k&quot;&gt;}&lt;/span&gt;&lt;span class=&quot;sh&quot;&gt;
KEY_PATH=&lt;/span&gt;&lt;span class=&quot;k&quot;&gt;${&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;4&lt;/span&gt;&lt;span class=&quot;k&quot;&gt;:-&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;$DEFAULT_KEY&lt;/span&gt;&lt;span class=&quot;k&quot;&gt;}&lt;/span&gt;&lt;span class=&quot;sh&quot;&gt;

# Create an option for the SSH key
SSH_OPTS=&quot;&quot;
if [ -f &quot;&lt;/span&gt;&lt;span class=&quot;si&quot;&gt;$(&lt;/span&gt;&lt;span class=&quot;nb&quot;&gt;eval echo&lt;/span&gt; &lt;span class=&quot;nv&quot;&gt;$KEY_PATH&lt;/span&gt;&lt;span class=&quot;si&quot;&gt;)&lt;/span&gt;&lt;span class=&quot;sh&quot;&gt;&quot; ]; then
    SSH_OPTS=&quot;-i &lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;$KEY_PATH&lt;/span&gt;&lt;span class=&quot;sh&quot;&gt;&quot;
fi

echo &quot;🚀 Using configuration:&quot;
echo &quot;   Source (SRC):      &lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;$SRC&lt;/span&gt;&lt;span class=&quot;sh&quot;&gt;&quot;
echo &quot;   Destination (DST): &lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;$DST&lt;/span&gt;&lt;span class=&quot;sh&quot;&gt;&quot;
echo &quot;   User:              &lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;$USR&lt;/span&gt;&lt;span class=&quot;sh&quot;&gt;&quot;
echo &quot;   SSH key:           &lt;/span&gt;&lt;span class=&quot;k&quot;&gt;${&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;KEY_PATH&lt;/span&gt;&lt;span class=&quot;k&quot;&gt;:-&lt;/span&gt;&lt;span class=&quot;s1&quot;&gt;&apos;default&apos;&lt;/span&gt;&lt;span class=&quot;k&quot;&gt;}&lt;/span&gt;&lt;span class=&quot;sh&quot;&gt;&quot;
echo &quot;----------------------------------------------------&quot;

# List of files
FILES=(
    &quot;/etc/kubernetes/pki/ca.crt&quot;
    &quot;/etc/kubernetes/pki/ca.key&quot;
    &quot;/etc/kubernetes/pki/sa.pub&quot;
    &quot;/etc/kubernetes/pki/sa.key&quot;
    &quot;/etc/kubernetes/pki/front-proxy-ca.crt&quot;
    &quot;/etc/kubernetes/pki/front-proxy-ca.key&quot;
    &quot;/etc/kubernetes/pki/apiserver-etcd-client.crt&quot;
    &quot;/etc/kubernetes/pki/apiserver-etcd-client.key&quot;
    &quot;/etc/kubernetes/pki/etcd/ca.crt&quot;
)

# 1. Making the folders
ssh &lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;$SSH_OPTS&lt;/span&gt;&lt;span class=&quot;sh&quot;&gt; -t &lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;$USR&lt;/span&gt;&lt;span class=&quot;sh&quot;&gt;@&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;$DST&lt;/span&gt;&lt;span class=&quot;sh&quot;&gt; &quot;sudo mkdir -p /etc/kubernetes/pki/etcd&quot;

# 2. File transfer
for FILE in &quot;&lt;/span&gt;&lt;span class=&quot;k&quot;&gt;${&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;FILES&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;[@]&lt;/span&gt;&lt;span class=&quot;k&quot;&gt;}&lt;/span&gt;&lt;span class=&quot;sh&quot;&gt;&quot;; do
    echo &quot;Transfer: &lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;$FILE&lt;/span&gt;&lt;span class=&quot;sh&quot;&gt;&quot;
    ssh &lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;$SSH_OPTS&lt;/span&gt;&lt;span class=&quot;sh&quot;&gt; &lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;$USR&lt;/span&gt;&lt;span class=&quot;sh&quot;&gt;@&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;$SRC&lt;/span&gt;&lt;span class=&quot;sh&quot;&gt; &quot;sudo cat &lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;$FILE&lt;/span&gt;&lt;span class=&quot;sh&quot;&gt;&quot; | ssh &lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;$SSH_OPTS&lt;/span&gt;&lt;span class=&quot;sh&quot;&gt; &lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;$USR&lt;/span&gt;&lt;span class=&quot;sh&quot;&gt;@&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;$DST&lt;/span&gt;&lt;span class=&quot;sh&quot;&gt; &quot;sudo tee &lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;$FILE&lt;/span&gt;&lt;span class=&quot;sh&quot;&gt; &amp;gt; /dev/null&quot;
done

# 3. Owner and access rights settings
echo &quot;🔒 Change owner to root and set permissions to &lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;$DST&lt;/span&gt;&lt;span class=&quot;sh&quot;&gt;...&quot;
ssh &lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;$SSH_OPTS&lt;/span&gt;&lt;span class=&quot;sh&quot;&gt; -t &lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;$USR&lt;/span&gt;&lt;span class=&quot;sh&quot;&gt;@&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;$DST&lt;/span&gt;&lt;span class=&quot;sh&quot;&gt; &quot; &lt;/span&gt;&lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;&lt;span class=&quot;sh&quot;&gt;
    sudo chown -R root:root /etc/kubernetes/pki &amp;amp;&amp;amp; &lt;/span&gt;&lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;&lt;span class=&quot;sh&quot;&gt;
    sudo chmod 644 /etc/kubernetes/pki/*.crt /etc/kubernetes/pki/sa.pub &amp;amp;&amp;amp; &lt;/span&gt;&lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;&lt;span class=&quot;sh&quot;&gt;
    sudo chmod 600 /etc/kubernetes/pki/*.key /etc/kubernetes/pki/sa.key&quot;

echo &quot;✅ Successfully completed!&quot;
&lt;/span&gt;&lt;span class=&quot;no&quot;&gt;EOF

&lt;/span&gt;&lt;span class=&quot;nb&quot;&gt;chmod&lt;/span&gt; +x k8s-transit.sh
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;    &lt;/div&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;&lt;strong&gt;Configuring access to etcd (Local Proxy)&lt;/strong&gt;&lt;/p&gt;

    &lt;p&gt;Since we are using an architecture where each node communicates with etcd through a local proxy (HAProxy), let’s make sure that the proxy manifest is already on the new node.&lt;/p&gt;

    &lt;p&gt;If not, copy the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;etcd-haproxy.yaml&lt;/code&gt; file from &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;cp-1&lt;/code&gt; to the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;/etc/kubernetes/manifests/&lt;/code&gt; directory on the new node. &lt;em&gt;This ensures that as soon as kubelet starts, it will establish a connection to the database.&lt;/em&gt;&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;&lt;strong&gt;Generating the Join command (on the running node)&lt;/strong&gt;&lt;/p&gt;

    &lt;p&gt;On the running node (&lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;cp-1&lt;/code&gt;), generate a token and hash. We don’t need a certificate key because we already transferred the files manually.&lt;/p&gt;

    &lt;div class=&quot;language-bash highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;c&quot;&gt;# On cp-1&lt;/span&gt;
kubeadm token create &lt;span class=&quot;nt&quot;&gt;--print-join-command&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;    &lt;/div&gt;

    &lt;p&gt;&lt;em&gt;As a result, we will get a command that looks like this:&lt;/em&gt; &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;kubeadm join 10.10.0.100:6443 --token &amp;lt;token&amp;gt; --discovery-token-ca-cert-hash &amp;lt;hash&amp;gt;&lt;/code&gt;&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;&lt;strong&gt;Start the join (on the new node)&lt;/strong&gt;&lt;/p&gt;

    &lt;p&gt;Run the command on the new node, adding the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;--control-plane&lt;/code&gt; flag. &lt;strong&gt;⚠️ Do not add&lt;/strong&gt; &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;--certificate-key&lt;/code&gt;.&lt;/p&gt;

    &lt;div class=&quot;language-bash highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;c&quot;&gt;# On the new node&lt;/span&gt;
&lt;span class=&quot;nb&quot;&gt;sudo &lt;/span&gt;kubeadm &lt;span class=&quot;nb&quot;&gt;join &lt;/span&gt;10.10.0.100:6443 &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
   &lt;span class=&quot;nt&quot;&gt;--token&lt;/span&gt; &amp;lt;your_token&amp;gt; &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
   &lt;span class=&quot;nt&quot;&gt;--discovery-token-ca-cert-hash&lt;/span&gt; sha256:&amp;lt;your_hash&amp;gt; &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
   &lt;span class=&quot;nt&quot;&gt;--control-plane&lt;/span&gt; &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
   &lt;span class=&quot;nt&quot;&gt;--ignore-preflight-errors&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;ExternalEtcdVersion
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;    &lt;/div&gt;

    &lt;details&gt;
      &lt;summary&gt;&lt;strong&gt;View node joining log&lt;/strong&gt;&lt;/summary&gt;

      &lt;pre&gt;&lt;code class=&quot;language-log&quot;&gt;k8sadmin@cp-4:~$ sudo kubeadm join 10.10.0.100:6443 --token efwfq8.puh8nqqk3a3dqqdx --discovery-token-ca-cert-hash sha256:4c23033729b477d1fc30ae4b4041fe7dae70fa8defd5ecb57c571e969e00f8e0 --control-plane --ignore-preflight-errors=ExternalEtcdVersion
[preflight] Running pre-flight checks
[preflight] Reading configuration from the &quot;kubeadm-config&quot; ConfigMap in namespace &quot;kube-system&quot;...
[preflight] Use &apos;kubeadm init phase upload-config kubeadm --config your-config-file&apos; to re-upload it.
[preflight] Running pre-flight checks before initializing the new control plane instance
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using &apos;kubeadm config images pull&apos;
[certs] Using certificateDir folder &quot;/etc/kubernetes/pki&quot;
[certs] Generating &quot;front-proxy-client&quot; certificate and key
[certs] Generating &quot;apiserver&quot; certificate and key
[certs] apiserver serving cert is signed for DNS names [cp-4 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.2.179 10.10.0.100]
[certs] Generating &quot;apiserver-kubelet-client&quot; certificate and key
[certs] Valid certificates and keys now exist in &quot;/etc/kubernetes/pki&quot;
[certs] Using the existing &quot;sa&quot; key
[kubeconfig] Generating kubeconfig files
[kubeconfig] Using kubeconfig folder &quot;/etc/kubernetes&quot;
[kubeconfig] Writing &quot;admin.conf&quot; kubeconfig file
[kubeconfig] Writing &quot;controller-manager.conf&quot; kubeconfig file
[kubeconfig] Writing &quot;scheduler.conf&quot; kubeconfig file
[control-plane] Using manifest folder &quot;/etc/kubernetes/manifests&quot;
[control-plane] Creating static Pod manifest for &quot;kube-apiserver&quot;
[control-plane] Creating static Pod manifest for &quot;kube-controller-manager&quot;
[control-plane] Creating static Pod manifest for &quot;kube-scheduler&quot;
[check-etcd] Skipping etcd check in external mode
[kubelet-start] Writing kubelet configuration to file &quot;/var/lib/kubelet/instance-config.yaml&quot;
[patches] Applied patch of type &quot;application/strategic-merge-patch+json&quot; to target &quot;kubeletconfiguration&quot;
[kubelet-start] Writing kubelet configuration to file &quot;/var/lib/kubelet/config.yaml&quot;
[kubelet-start] Writing kubelet environment file with flags to file &quot;/var/lib/kubelet/kubeadm-flags.env&quot;
[kubelet-start] Starting the kubelet
[control-plane-join] Using external etcd - no local stacked instance added
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is healthy after 503.676345ms
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap
[mark-control-plane] Marking the node cp-4 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node cp-4 as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]
[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
[control-plane-check] Checking kube-apiserver at https://192.168.2.179:6443/livez
[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
[control-plane-check] kube-scheduler is healthy after 9.840373ms
[control-plane-check] kube-controller-manager is healthy after 11.326085ms
[control-plane-check] kube-apiserver is healthy after 21.681901ms

This node has joined the cluster and a new control plane instance was created:

* Certificate signing request was sent to apiserver and approval was received.
* The Kubelet was informed of the new secure connection details.
* Control plane label and taint were applied to the new node.
* The Kubernetes control plane instances scaled up.


To start administering your cluster from this node, you need to run the following as a regular user:

	mkdir -p $HOME/.kube
	sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	sudo chown $(id -u):$(id -g) $HOME/.kube/config

Run &apos;kubectl get nodes&apos; to see this node join the cluster.
&lt;/code&gt;&lt;/pre&gt;

    &lt;/details&gt;
    &lt;p&gt;&lt;br /&gt;&lt;/p&gt;
  &lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;If you have &lt;strong&gt;Stacked Etcd&lt;/strong&gt;, the following command will be sufficient to obtain new instructions for joining nodes to the cluster. In this architecture, &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;kubeadm&lt;/code&gt; fully automates the process: it downloads all the necessary certificates (including those required to create a new etcd member node) into the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;kubeadm-certs&lt;/code&gt; secret.&lt;/p&gt;

&lt;div class=&quot;language-bash highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;c&quot;&gt;# On node cp-1&lt;/span&gt;
ssh &lt;span class=&quot;nt&quot;&gt;-i&lt;/span&gt; ~/.ssh/k8s_cluster_key k8sadmin@10.10.0.21
&lt;span class=&quot;nb&quot;&gt;sudo&lt;/span&gt; &lt;span class=&quot;nt&quot;&gt;-sE&lt;/span&gt;
&lt;span class=&quot;si&quot;&gt;$(&lt;/span&gt;kubeadm token create &lt;span class=&quot;nt&quot;&gt;--print-join-command&lt;/span&gt;&lt;span class=&quot;si&quot;&gt;)&lt;/span&gt; &lt;span class=&quot;nt&quot;&gt;--control-plane&lt;/span&gt; &lt;span class=&quot;nt&quot;&gt;--certificate-key&lt;/span&gt; &lt;span class=&quot;si&quot;&gt;$(&lt;/span&gt;kubeadm init phase upload-certs &lt;span class=&quot;nt&quot;&gt;--upload-certs&lt;/span&gt; | &lt;span class=&quot;nb&quot;&gt;tail&lt;/span&gt; &lt;span class=&quot;nt&quot;&gt;-1&lt;/span&gt;&lt;span class=&quot;si&quot;&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;This works for Stacked Etcd:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;
    &lt;p&gt;&lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;upload-certs&lt;/code&gt;: Unlike External Etcd, in Stacked mode &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;kubeadm&lt;/code&gt; knows about all local etcd certificates and packs them into a bundle.&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;&lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;--certificate-key&lt;/code&gt;: You get a decryption key that allows the new node to automatically download this package to its &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;/etc/kubernetes/pki/&lt;/code&gt; directory.&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;&lt;strong&gt;Automation&lt;/strong&gt;: The new node does not need to copy anything manually — it will create both Kubernetes certificates and a local copy of etcd, joining it to the existing participants.&lt;/p&gt;
  &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This command works great, but: &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;kubeadm init phase upload-certs --upload-certs&lt;/code&gt; uploads certificates to the secret for only &lt;strong&gt;2 hours&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;If you plan to add nodes later, you will have to run this command again. If you are doing this “here and now,” this is the ideal and fastest way.&lt;/p&gt;
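If the two-hour window has already passed, re-uploading alone is enough. A minimal sketch, run on any healthy control plane node:

```shell
# Re-upload the certificate bundle; the last line printed is the fresh
# value for --certificate-key (valid for another 2 hours).
sudo kubeadm init phase upload-certs --upload-certs

# Generate a matching join command with a new bootstrap token:
sudo kubeadm token create --print-join-command
```

Combining the two outputs gives a complete control plane join command, exactly as in the one-liner above.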

&lt;p&gt;&lt;strong&gt;Tip&lt;/strong&gt;: If there have already been failed attempts to join on the new node, be sure to run &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;sudo kubeadm reset -f&lt;/code&gt; before running the new command.&lt;/p&gt;

&lt;h3 id=&quot;disconnecting-and-replacing-a-control-panel-node&quot;&gt;Disconnecting and replacing a control plane node&lt;/h3&gt;

&lt;p&gt;Removing a control plane node when it fails, or replacing it, has a significant advantage in an architecture with &lt;strong&gt;External Etcd&lt;/strong&gt;: since the database lives outside the Kubernetes nodes, you only risk API availability, not data integrity.&lt;/p&gt;

&lt;p&gt;To remove a control plane node (for example, &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;cp-2&lt;/code&gt;) from the cluster, follow these steps.&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;
    &lt;p&gt;&lt;strong&gt;Prepare the cluster&lt;/strong&gt;&lt;/p&gt;

    &lt;p&gt;Perform these steps on another healthy node (for example, &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;cp-1&lt;/code&gt;) or talk to the control plane via the HAProxy balancer address.&lt;/p&gt;

    &lt;ul&gt;
      &lt;li&gt;&lt;strong&gt;Remove the load from the node&lt;/strong&gt;: Allow current tasks to complete and prevent new pods from being assigned&lt;/li&gt;
    &lt;/ul&gt;

    &lt;div class=&quot;language-bash highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;kubectl cordon cp-2
kubectl drain cp-2 &lt;span class=&quot;nt&quot;&gt;--ignore-daemonsets&lt;/span&gt; &lt;span class=&quot;nt&quot;&gt;--delete-emptydir-data&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;    &lt;/div&gt;

    &lt;ul&gt;
      &lt;li&gt;&lt;strong&gt;Remove the node from the Kubernetes registry&lt;/strong&gt;: This will delete the Node object and associated certificates (if auto-rotation is used).&lt;/li&gt;
    &lt;/ul&gt;

    &lt;div class=&quot;language-bash highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;kubectl delete node cp-2
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;    &lt;/div&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;&lt;strong&gt;Cleaning up the node itself (on cp-2)&lt;/strong&gt;&lt;/p&gt;

    &lt;p&gt;Now go to the node you want to disconnect.&lt;/p&gt;

    &lt;ul&gt;
      &lt;li&gt;
        &lt;p&gt;&lt;strong&gt;Resetting kubeadm settings&lt;/strong&gt;: This will remove static pod manifests (&lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;kube-apiserver&lt;/code&gt;, &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;controller-manager&lt;/code&gt;, etc.) and configuration files in &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;/etc/kubernetes&lt;/code&gt;.&lt;/p&gt;

        &lt;div class=&quot;language-bash highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;nb&quot;&gt;sudo &lt;/span&gt;kubeadm reset &lt;span class=&quot;nt&quot;&gt;-f&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;        &lt;/div&gt;
      &lt;/li&gt;
      &lt;li&gt;
        &lt;p&gt;&lt;strong&gt;Clearing IPVS/Iptables:&lt;/strong&gt;&lt;/p&gt;

        &lt;div class=&quot;language-bash highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;nb&quot;&gt;sudo &lt;/span&gt;iptables &lt;span class=&quot;nt&quot;&gt;-F&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class=&quot;nb&quot;&gt;sudo &lt;/span&gt;iptables &lt;span class=&quot;nt&quot;&gt;-t&lt;/span&gt; nat &lt;span class=&quot;nt&quot;&gt;-F&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class=&quot;nb&quot;&gt;sudo &lt;/span&gt;iptables &lt;span class=&quot;nt&quot;&gt;-t&lt;/span&gt; mangle &lt;span class=&quot;nt&quot;&gt;-F&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class=&quot;nb&quot;&gt;sudo &lt;/span&gt;iptables &lt;span class=&quot;nt&quot;&gt;-X&lt;/span&gt;

&lt;span class=&quot;c&quot;&gt;# If you used IPVS:&lt;/span&gt;
&lt;span class=&quot;nb&quot;&gt;sudo &lt;/span&gt;ipvsadm &lt;span class=&quot;nt&quot;&gt;--clear&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;        &lt;/div&gt;
      &lt;/li&gt;
      &lt;li&gt;
        &lt;p&gt;&lt;strong&gt;Removing residual files&lt;/strong&gt;: It is recommended to delete the certificate folder to avoid conflicts when reconnecting.&lt;/p&gt;

        &lt;div class=&quot;language-bash highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;nb&quot;&gt;sudo rm&lt;/span&gt; &lt;span class=&quot;nt&quot;&gt;-rf&lt;/span&gt; /etc/kubernetes/pki
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;        &lt;/div&gt;
      &lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;&lt;strong&gt;Cleaning up Etcd&lt;/strong&gt;&lt;/p&gt;

    &lt;p&gt;This is where the main difference between the architectures comes into play.&lt;/p&gt;

    &lt;p&gt;&lt;strong&gt;For External Etcd&lt;/strong&gt;:&lt;/p&gt;

    &lt;p&gt;Since etcd runs separately, &lt;strong&gt;nothing needs to be done&lt;/strong&gt;. The external etcd cluster doesn’t even know that the API server on &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;cp-2&lt;/code&gt; has stopped working. You have simply removed one of the database clients.&lt;/p&gt;

    &lt;p&gt;&lt;strong&gt;For Stacked Etcd:&lt;/strong&gt;&lt;/p&gt;

    &lt;p&gt;This is the most important step. Since etcd was running locally on &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;cp-2&lt;/code&gt;, it was part of the quorum. If you simply shut down the node, the etcd cluster will consider it “down” and wait for it to return, which can negatively affect the quorum.&lt;/p&gt;

    &lt;ol&gt;
      &lt;li&gt;You need to log in to another master node.&lt;/li&gt;
      &lt;li&gt;
        &lt;p&gt;Find the etcd member ID for cp-2:&lt;/p&gt;

        &lt;div class=&quot;language-bash highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;kubectl &lt;span class=&quot;nb&quot;&gt;exec&lt;/span&gt; &lt;span class=&quot;nt&quot;&gt;-n&lt;/span&gt; kube-system etcd-cp-1 &lt;span class=&quot;nt&quot;&gt;--&lt;/span&gt; etcdctl &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
  &lt;span class=&quot;nt&quot;&gt;--endpoints&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;https://127.0.0.1:2379 &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
  &lt;span class=&quot;nt&quot;&gt;--cacert&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;/etc/kubernetes/pki/etcd/ca.crt &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
  &lt;span class=&quot;nt&quot;&gt;--cert&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;/etc/kubernetes/pki/etcd/server.crt &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
  &lt;span class=&quot;nt&quot;&gt;--key&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;/etc/kubernetes/pki/etcd/server.key &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
  member list
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;        &lt;/div&gt;
      &lt;/li&gt;
      &lt;li&gt;
        &lt;p&gt;Remove this ID:&lt;/p&gt;

        &lt;div class=&quot;language-bash highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;kubectl &lt;span class=&quot;nb&quot;&gt;exec&lt;/span&gt; &lt;span class=&quot;nt&quot;&gt;-n&lt;/span&gt; kube-system etcd-cp-1 &lt;span class=&quot;nt&quot;&gt;--&lt;/span&gt; etcdctl &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
  &lt;span class=&quot;nt&quot;&gt;--endpoints&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;https://127.0.0.1:2379 &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
  &lt;span class=&quot;nt&quot;&gt;--cacert&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;/etc/kubernetes/pki/etcd/ca.crt &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
  &lt;span class=&quot;nt&quot;&gt;--cert&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;/etc/kubernetes/pki/etcd/server.crt &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
  &lt;span class=&quot;nt&quot;&gt;--key&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;/etc/kubernetes/pki/etcd/server.key &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
  member remove &amp;lt;MEMBER_ID&amp;gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;        &lt;/div&gt;
      &lt;/li&gt;
    &lt;/ol&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;&lt;strong&gt;Load Balancer Update&lt;/strong&gt;&lt;/p&gt;

    &lt;p&gt;If you have an external balancer in front of your control plane (e.g., HAProxy or a cloud LB), be sure to &lt;strong&gt;remove the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;cp-2&lt;/code&gt; IP address from the backend configuration&lt;/strong&gt;.&lt;/p&gt;

    &lt;p&gt;If you are running a local HAProxy on each node (as configured earlier), the remaining nodes will simply stop routing to &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;cp-2&lt;/code&gt;, but you should still update the configuration on the worker nodes so they do not waste time trying to connect to the missing control plane node.&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;&lt;strong&gt;Replacing a node (if necessary)&lt;/strong&gt;&lt;/p&gt;

    &lt;p&gt;If you want to replace &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;cp-2&lt;/code&gt; with a new node with the same name:&lt;/p&gt;
  &lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
  &lt;li&gt;Prepare a new OS.&lt;/li&gt;
  &lt;li&gt;Follow the steps from the previous instructions (copy 9 certificate files + proxy manifest).&lt;/li&gt;
  &lt;li&gt;Run &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;kubeadm join&lt;/code&gt;.&lt;/li&gt;
&lt;/ul&gt;
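Assuming the same paths and SSH key as in the earlier steps, the certificate copy can be sketched with tar over ssh. The replacement node address (10.10.0.22) is an assumption, and the file list reflects the standard kubeadm layout for an external etcd topology — adjust both if your setup differs:

```shell
# Run from cp-1. Copies the nine shared PKI files a new control plane
# node needs in an external etcd topology (kubeadm default paths).
# Assumes passwordless sudo for k8sadmin on the target node.
NEW_NODE=10.10.0.22   # assumed IP of the replacement node
KEY=~/.ssh/k8s_cluster_key

ssh -i $KEY k8sadmin@$NEW_NODE 'sudo mkdir -p /etc/kubernetes/pki/etcd'

sudo tar -C /etc/kubernetes/pki -cf - \
    ca.crt ca.key sa.key sa.pub \
    front-proxy-ca.crt front-proxy-ca.key \
    apiserver-etcd-client.crt apiserver-etcd-client.key \
    etcd/ca.crt |
  ssh -i $KEY k8sadmin@$NEW_NODE 'sudo tar -C /etc/kubernetes/pki -xf -'
```

After the files are in place, run `kubeadm join` with the `--control-plane` flag as before.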

&lt;h2 id=&quot;deploying-worker-nodes&quot;&gt;Deploying Worker Nodes&lt;/h2&gt;

&lt;p&gt;To deploy worker nodes, we will use the settings from &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;snipets/cloud-init-config.yaml&lt;/code&gt;, &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;snipets/cloud-init-user.yaml&lt;/code&gt;, and &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;snipets/cloud-init-base.yaml&lt;/code&gt;. During the file merging process, we will remove &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;kubectl&lt;/code&gt; from the list of packages to install. For worker nodes, we will use IP addresses from the range 10.10.0.30 — 10.10.0.99.&lt;/p&gt;

&lt;div class=&quot;language-bash highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;nb&quot;&gt;export &lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;VM_IP&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;10.10.0.31/24&quot;&lt;/span&gt;

multipass launch &lt;span class=&quot;nt&quot;&gt;--name&lt;/span&gt; wn-1 &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
  &lt;span class=&quot;nt&quot;&gt;--cpus&lt;/span&gt; 2 &lt;span class=&quot;nt&quot;&gt;--memory&lt;/span&gt; 2G &lt;span class=&quot;nt&quot;&gt;--disk&lt;/span&gt; 10G &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
  &lt;span class=&quot;nt&quot;&gt;--network&lt;/span&gt; &lt;span class=&quot;nv&quot;&gt;name&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;en0,mode&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;manual &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
  &lt;span class=&quot;nt&quot;&gt;--cloud-init&lt;/span&gt; &amp;lt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt; yq eval-all &lt;span class=&quot;s1&quot;&gt;&apos;
      # Merge all files into a single object
      . as $item ireduce ({}; . *+ $item) |

      # Removing kubectl from the package list
      del(.packages[] | select(. == &quot;kubectl&quot;)) |

      # Update network configuration
      with(.write_files[] | select(.path == &quot;/etc/netplan/60-static-ip.yaml&quot;);
        .content |= (
          from_yaml |
          .network.ethernets.enp0s2.addresses += [strenv(VM_IP)] |
          to_yaml
        )
      ) &apos;&lt;/span&gt; &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
      snipets/cloud-init-config.yaml &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
      snipets/cloud-init-user.yaml &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
      snipets/cloud-init-base.yaml &lt;span class=&quot;o&quot;&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Once the virtual machine has been provisioned, we will log into it via ssh.&lt;/p&gt;

&lt;div class=&quot;language-bash highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;ssh &lt;span class=&quot;nt&quot;&gt;-i&lt;/span&gt; ~/.ssh/k8s_cluster_key k8sadmin@10.10.0.31
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Then join it to the cluster using the command we received during initialization of the control plane:&lt;/p&gt;

&lt;div class=&quot;language-bash highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;nb&quot;&gt;sudo &lt;/span&gt;kubeadm &lt;span class=&quot;nb&quot;&gt;join &lt;/span&gt;10.10.0.100:6443 &lt;span class=&quot;nt&quot;&gt;--token&lt;/span&gt; jfugsz.51c9pp44fwifcu94 &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
  &lt;span class=&quot;nt&quot;&gt;--discovery-token-ca-cert-hash&lt;/span&gt; sha256:736ca5cf47fca3f4c1fd9c414f6ba70e4fefdb2f52deec5e2526f5ebccf838d6
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;The Calico operator (tigera-operator) will register the node in the pod network, and within a minute or two the worker node’s status will change to READY. The node is then ready to accept workloads.&lt;/p&gt;
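To confirm the join, run these checks from any control plane node; the `calico-system` namespace assumes the operator-based Calico install used earlier:

```shell
# The node should move from NotReady to Ready once Calico is up on it
kubectl get node wn-1

# The per-node Calico pod scheduled on the new worker
kubectl get pods -n calico-system -o wide --field-selector spec.nodeName=wn-1
```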

&lt;h2 id=&quot;installing-metallb&quot;&gt;Installing MetalLB&lt;/h2&gt;

&lt;p&gt;For instructions on installing MetalLB to balance access to worker nodes, refer to the article “&lt;a href=&quot;https://blog.andygol.co.ua/en/2025/12/12/k8s-cluster-with-kubeadm/#step-7-install-metallb&quot;&gt;Deploying a Kubernetes cluster on a local computer: A Complete Step-by-Step Guide&lt;/a&gt;”.&lt;/p&gt;

&lt;p&gt;Your cluster is now ready to accept workloads.&lt;/p&gt;

&lt;h2 id=&quot;testing-the-cluster&quot;&gt;Testing the cluster&lt;/h2&gt;

&lt;div class=&quot;language-bash highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;multipass &lt;span class=&quot;nb&quot;&gt;exec &lt;/span&gt;cp-1 &lt;span class=&quot;nt&quot;&gt;--&lt;/span&gt; kubectl get nodes
multipass &lt;span class=&quot;nb&quot;&gt;exec &lt;/span&gt;cp-1 &lt;span class=&quot;nt&quot;&gt;--&lt;/span&gt; kubectl get pods &lt;span class=&quot;nt&quot;&gt;-A&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;h2 id=&quot;summary&quot;&gt;Summary&lt;/h2&gt;

&lt;p&gt;This guide demonstrates a complete deployment of an HA Kubernetes cluster with an external etcd topology. The cluster is ready to use and can be easily scaled. All steps follow the DRY principle and can be adapted to different environments.&lt;/p&gt;

&lt;hr /&gt;

&lt;h2 id=&quot;appendices&quot;&gt;Appendices&lt;/h2&gt;

&lt;h3 id=&quot;network-architecture-for-ha-kubernetes-with-external-etcd&quot;&gt;Network architecture for HA Kubernetes with external etcd&lt;/h3&gt;

&lt;details&gt;
  &lt;summary&gt;&lt;strong&gt;Look…&lt;/strong&gt;&lt;/summary&gt;

  &lt;h4 id=&quot;recommended-network-topology--subnet-10100022-1024-addresses&quot;&gt;Recommended network topology — Subnet: 10.10.0.0/22 (1024 addresses)&lt;/h4&gt;

  &lt;p&gt;This subnet is large enough for expansion and is logically divided into blocks:&lt;/p&gt;

  &lt;div class=&quot;language-text highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;┌─────────────────────────────────────────────────────────┐
│ 10.10.0.0/22      - Kubernetes home subnet.             │
├─────────────────────────────────────────────────────────┤
│ 10.10.0.0/26      - Infrastructure (64 addresses)       │
│   10.10.0.1       - Gateway (macOS host)                │
│   10.10.0.2       - DNS (optional)                      │
│   10.10.0.10      - HAProxy/Load Balancer               │
│   10.10.0.11-20   - Infrastructure reserve.             │
├─────────────────────────────────────────────────────────┤
│ 10.10.0.64/26     - etcd cluster (64 addresses)         │
│   10.10.0.65      - etcd-1                              │
│   10.10.0.66      - etcd-2                              │
│   10.10.0.67      - etcd-3                              │
│   10.10.0.68-70   - Reserve for additional etcd         │
├─────────────────────────────────────────────────────────┤
│ 10.10.0.128/25    - Control Plane (128 addresses)       │
│   10.10.0.129     - k8s-master-1                        │
│   10.10.0.130     - k8s-master-2                        │
│   10.10.0.131     - k8s-master-3                        │
│   10.10.0.132-140 - Reserve for additional masters      │
├─────────────────────────────────────────────────────────┤
│ 10.10.1.0/24      - Worker Nodes (256 addresses)        │
│   10.10.1.10      - k8s-worker-1                        │
│   10.10.1.11      - k8s-worker-2                        │
│   10.10.1.12      - k8s-worker-3                        │
│   10.10.1.13      - k8s-worker-4                        │
│   10.10.1.14      - k8s-worker-5                        │
│   10.10.1.15-100  - Reserve for additional workers      │
├─────────────────────────────────────────────────────────┤
│ 10.10.2.0/24   - Pod Network (256 addresses) - optional │
│   Used for testing.                                     │
├─────────────────────────────────────────────────────────┤
│ 10.10.3.0/24   - Service Network - optional             │
│   Used for testing.                                     │
└─────────────────────────────────────────────────────────┘
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;  &lt;/div&gt;
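The block sizes in the plan follow directly from the prefix lengths — the address count of a CIDR block is 2^(32 - prefix):

```shell
# Addresses in a CIDR block = 2^(32 - prefix)
for p in 22 26 25 24; do
  echo "/$p -> $((1 << (32 - p))) addresses"
done
# /22 -> 1024, /26 -> 64, /25 -> 128, /24 -> 256
```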

  &lt;p&gt;&lt;strong&gt;Detailed IP address plan&lt;/strong&gt;&lt;/p&gt;

  &lt;h4 id=&quot;infrastructure-10100026&quot;&gt;1. Infrastructure (10.10.0.0/26)&lt;/h4&gt;

  &lt;table&gt;
    &lt;thead&gt;
      &lt;tr&gt;
        &lt;th&gt;IP address&lt;/th&gt;
        &lt;th&gt;Hostname&lt;/th&gt;
        &lt;th&gt;Role&lt;/th&gt;
      &lt;/tr&gt;
    &lt;/thead&gt;
    &lt;tbody&gt;
      &lt;tr&gt;
        &lt;td&gt;10.10.0.1&lt;/td&gt;
        &lt;td&gt;macOS-host&lt;/td&gt;
        &lt;td&gt;Gateway/Bridge&lt;/td&gt;
      &lt;/tr&gt;
      &lt;tr&gt;
        &lt;td&gt;10.10.0.10&lt;/td&gt;
        &lt;td&gt;haproxy-lb&lt;/td&gt;
        &lt;td&gt;Load Balancer (optional)&lt;/td&gt;
      &lt;/tr&gt;
    &lt;/tbody&gt;
  &lt;/table&gt;

  &lt;h4 id=&quot;etcd-cluster-101006426&quot;&gt;2. etcd Cluster (10.10.0.64/26)&lt;/h4&gt;

  &lt;table&gt;
    &lt;thead&gt;
      &lt;tr&gt;
        &lt;th&gt;IP address&lt;/th&gt;
        &lt;th&gt;Hostname&lt;/th&gt;
        &lt;th&gt;Role&lt;/th&gt;
        &lt;th&gt;vCPU&lt;/th&gt;
        &lt;th&gt;RAM&lt;/th&gt;
        &lt;th&gt;Disk&lt;/th&gt;
      &lt;/tr&gt;
    &lt;/thead&gt;
    &lt;tbody&gt;
      &lt;tr&gt;
        &lt;td&gt;10.10.0.65&lt;/td&gt;
        &lt;td&gt;etcd-1&lt;/td&gt;
        &lt;td&gt;etcd member&lt;/td&gt;
        &lt;td&gt;2&lt;/td&gt;
        &lt;td&gt;2GB&lt;/td&gt;
        &lt;td&gt;20GB&lt;/td&gt;
      &lt;/tr&gt;
      &lt;tr&gt;
        &lt;td&gt;10.10.0.66&lt;/td&gt;
        &lt;td&gt;etcd-2&lt;/td&gt;
        &lt;td&gt;etcd member&lt;/td&gt;
        &lt;td&gt;2&lt;/td&gt;
        &lt;td&gt;2GB&lt;/td&gt;
        &lt;td&gt;20GB&lt;/td&gt;
      &lt;/tr&gt;
      &lt;tr&gt;
        &lt;td&gt;10.10.0.67&lt;/td&gt;
        &lt;td&gt;etcd-3&lt;/td&gt;
        &lt;td&gt;etcd member&lt;/td&gt;
        &lt;td&gt;2&lt;/td&gt;
        &lt;td&gt;2GB&lt;/td&gt;
        &lt;td&gt;20GB&lt;/td&gt;
      &lt;/tr&gt;
    &lt;/tbody&gt;
  &lt;/table&gt;

  &lt;p&gt;&lt;strong&gt;Minimum etcd configuration:&lt;/strong&gt; 3 nodes for quorum&lt;/p&gt;
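The three-node minimum follows from etcd's quorum rule, quorum = floor(n/2) + 1; a quick check:

```shell
# Quorum and failure tolerance for an n-member etcd cluster
for n in 1 3 5; do
  q=$(( n / 2 + 1 ))
  echo "n=$n quorum=$q tolerates=$(( n - q ))"
done
# n=1 tolerates 0 failures; n=3 tolerates 1; n=5 tolerates 2
```

This is also why even member counts add no resilience: n=4 still only tolerates one failure.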

  &lt;h4 id=&quot;control-plane-1010012825&quot;&gt;3. Control Plane (10.10.0.128/25)&lt;/h4&gt;

  &lt;table&gt;
    &lt;thead&gt;
      &lt;tr&gt;
        &lt;th&gt;IP address&lt;/th&gt;
        &lt;th&gt;Hostname&lt;/th&gt;
        &lt;th&gt;Role&lt;/th&gt;
        &lt;th&gt;vCPU&lt;/th&gt;
        &lt;th&gt;RAM&lt;/th&gt;
        &lt;th&gt;Disk&lt;/th&gt;
      &lt;/tr&gt;
    &lt;/thead&gt;
    &lt;tbody&gt;
      &lt;tr&gt;
        &lt;td&gt;10.10.0.129&lt;/td&gt;
        &lt;td&gt;k8s-master-1&lt;/td&gt;
        &lt;td&gt;Control Plane (Primary)&lt;/td&gt;
        &lt;td&gt;2&lt;/td&gt;
        &lt;td&gt;4GB&lt;/td&gt;
        &lt;td&gt;40GB&lt;/td&gt;
      &lt;/tr&gt;
      &lt;tr&gt;
        &lt;td&gt;10.10.0.130&lt;/td&gt;
        &lt;td&gt;k8s-master-2&lt;/td&gt;
        &lt;td&gt;Control Plane&lt;/td&gt;
        &lt;td&gt;2&lt;/td&gt;
        &lt;td&gt;4GB&lt;/td&gt;
        &lt;td&gt;40GB&lt;/td&gt;
      &lt;/tr&gt;
      &lt;tr&gt;
        &lt;td&gt;10.10.0.131&lt;/td&gt;
        &lt;td&gt;k8s-master-3&lt;/td&gt;
        &lt;td&gt;Control Plane&lt;/td&gt;
        &lt;td&gt;2&lt;/td&gt;
        &lt;td&gt;4GB&lt;/td&gt;
        &lt;td&gt;40GB&lt;/td&gt;
      &lt;/tr&gt;
    &lt;/tbody&gt;
  &lt;/table&gt;

  &lt;p&gt;&lt;strong&gt;Minimum HA configuration:&lt;/strong&gt; 3 master nodes&lt;/p&gt;

  &lt;h4 id=&quot;worker-nodes-10101024&quot;&gt;4. Worker Nodes (10.10.1.0/24)&lt;/h4&gt;

  &lt;table&gt;
    &lt;thead&gt;
      &lt;tr&gt;
        &lt;th&gt;IP address&lt;/th&gt;
        &lt;th&gt;Hostname&lt;/th&gt;
        &lt;th&gt;Role&lt;/th&gt;
        &lt;th&gt;vCPU&lt;/th&gt;
        &lt;th&gt;RAM&lt;/th&gt;
        &lt;th&gt;Disk&lt;/th&gt;
      &lt;/tr&gt;
    &lt;/thead&gt;
    &lt;tbody&gt;
      &lt;tr&gt;
        &lt;td&gt;10.10.1.10&lt;/td&gt;
        &lt;td&gt;k8s-worker-1&lt;/td&gt;
        &lt;td&gt;Worker Node&lt;/td&gt;
        &lt;td&gt;2&lt;/td&gt;
        &lt;td&gt;4GB&lt;/td&gt;
        &lt;td&gt;50GB&lt;/td&gt;
      &lt;/tr&gt;
      &lt;tr&gt;
        &lt;td&gt;10.10.1.11&lt;/td&gt;
        &lt;td&gt;k8s-worker-2&lt;/td&gt;
        &lt;td&gt;Worker Node&lt;/td&gt;
        &lt;td&gt;2&lt;/td&gt;
        &lt;td&gt;4GB&lt;/td&gt;
        &lt;td&gt;50GB&lt;/td&gt;
      &lt;/tr&gt;
      &lt;tr&gt;
        &lt;td&gt;10.10.1.12&lt;/td&gt;
        &lt;td&gt;k8s-worker-3&lt;/td&gt;
        &lt;td&gt;Worker Node&lt;/td&gt;
        &lt;td&gt;2&lt;/td&gt;
        &lt;td&gt;4GB&lt;/td&gt;
        &lt;td&gt;50GB&lt;/td&gt;
      &lt;/tr&gt;
    &lt;/tbody&gt;
  &lt;/table&gt;

  &lt;p&gt;&lt;strong&gt;Recommendation:&lt;/strong&gt; Minimum 2-3 workers for testing, can scale to 100+&lt;/p&gt;

  &lt;h4 id=&quot;kubernetes-internal-networks-do-not-conflict-with-vms&quot;&gt;Kubernetes internal networks (do not conflict with VMs)&lt;/h4&gt;

  &lt;div class=&quot;language-yaml highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;c1&quot;&gt;# Pod Network (CNI - Calico/Flannel/Weave)&lt;/span&gt;
&lt;span class=&quot;s&quot;&gt;--pod-network-cidr=192.168.0.0/16&lt;/span&gt;

&lt;span class=&quot;c1&quot;&gt;# Service Network&lt;/span&gt;
&lt;span class=&quot;s&quot;&gt;--service-cidr=172.16.0.0/16&lt;/span&gt;

&lt;span class=&quot;c1&quot;&gt;# Cluster DNS&lt;/span&gt;
&lt;span class=&quot;s&quot;&gt;--cluster-dns=172.16.0.10&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;  &lt;/div&gt;

  &lt;p&gt;These networks are &lt;strong&gt;virtual&lt;/strong&gt; and do not conflict with the VM addresses in 10.10.x.x.&lt;/p&gt;
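The same flags map onto kubeadm's ClusterConfiguration. A sketch (the apiVersion may differ depending on your kubeadm release):

```yaml
# Fragment of a kubeadm ClusterConfiguration with the networks above
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
networking:
  podSubnet: 192.168.0.0/16      # --pod-network-cidr
  serviceSubnet: 172.16.0.0/16   # --service-cidr
```

kubeadm derives the cluster DNS address automatically as the tenth address of the service CIDR (172.16.0.10 here), so it does not need to be set separately.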

  &lt;h3 id=&quot;alternative-subnet-options&quot;&gt;Alternative subnet options&lt;/h3&gt;

  &lt;h4 id=&quot;option-1-compact-topology-101010024&quot;&gt;Option 1: Compact topology (10.10.10.0/24)&lt;/h4&gt;

  &lt;p&gt;For a small cluster (up to 10 nodes):&lt;/p&gt;

  &lt;div class=&quot;language-text highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;10.10.10.0/24 - Main subnet (256 addresses)
├─ 10.10.10.1      - Gateway
├─ 10.10.10.10-12  - etcd (3 nodes)
├─ 10.10.10.20-22  - Control Plane (3 masters)
└─ 10.10.10.30-50  - Workers (up to 20 workers)
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;  &lt;/div&gt;

  &lt;h4 id=&quot;option-2-extended-topology-10100016&quot;&gt;Option 2: Extended topology (10.10.0.0/16)&lt;/h4&gt;

  &lt;p&gt;For a large production cluster:&lt;/p&gt;

  &lt;div class=&quot;language-text highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;10.10.0.0/16 - All subnet (65536 addresses)
├─ 10.10.1.0/24   - Infrastructure
├─ 10.10.10.0/24  - etcd cluster
├─ 10.10.20.0/24  - Control Plane
├─ 10.10.30.0/22  - Workers (1024 addresses)
└─ 10.10.40.0/22  - Reserve for expansion
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;  &lt;/div&gt;

  &lt;h4 id=&quot;option-3-classic-private-network-172160016&quot;&gt;Option 3: Classic private network (172.16.0.0/16)&lt;/h4&gt;

  &lt;p&gt;Alternative if 10.10.x.x is already taken:&lt;/p&gt;

  &lt;div class=&quot;language-text highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;172.16.0.0/16
├─ 172.16.1.0/24   - Infrastructure + etcd
├─ 172.16.2.0/24   - Control Plane
└─ 172.16.10.0/23  - Workers (512 addresses)
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;  &lt;/div&gt;

  &lt;h3 id=&quot;dns-records-recommended&quot;&gt;DNS records (recommended)&lt;/h3&gt;

  &lt;p&gt;Add to &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;/etc/hosts&lt;/code&gt; on macOS:&lt;/p&gt;

  &lt;div class=&quot;language-bash highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;c&quot;&gt;# Kubernetes HA Cluster&lt;/span&gt;
10.10.0.1       gateway.k8s.local
10.10.0.10      lb.k8s.local haproxy.k8s.local

&lt;span class=&quot;c&quot;&gt;# etcd cluster&lt;/span&gt;
10.10.0.65      etcd-1.k8s.local
10.10.0.66      etcd-2.k8s.local
10.10.0.67      etcd-3.k8s.local

&lt;span class=&quot;c&quot;&gt;# Control Plane&lt;/span&gt;
10.10.0.129     master-1.k8s.local k8s-master-1
10.10.0.130     master-2.k8s.local k8s-master-2
10.10.0.131     master-3.k8s.local k8s-master-3

&lt;span class=&quot;c&quot;&gt;# Workers&lt;/span&gt;
10.10.1.10      worker-1.k8s.local k8s-worker-1
10.10.1.11      worker-2.k8s.local k8s-worker-2
10.10.1.12      worker-3.k8s.local k8s-worker-3

&lt;span class=&quot;c&quot;&gt;# API endpoint (points to LB or any master)&lt;/span&gt;
10.10.0.10      api.k8s.local kubernetes.k8s.local
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;  &lt;/div&gt;
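On macOS, `/etc/hosts` entries are served through the system resolver, so plain `nslookup` will not see them; `dscacheutil` queries the same path the OS uses:

```shell
# Verify the entry is picked up by the macOS resolver;
# should report ip_address: 10.10.0.10 once the entry above is in place
dscacheutil -q host -a name api.k8s.local
```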

&lt;/details&gt;
&lt;p&gt;&lt;br /&gt;&lt;/p&gt;

&lt;hr /&gt;

&lt;h2 id=&quot;troubleshooting&quot;&gt;Troubleshooting&lt;/h2&gt;

&lt;details&gt;
  &lt;summary&gt;&lt;strong&gt;Look…&lt;/strong&gt;&lt;/summary&gt;

  &lt;h3 id=&quot;fix---pod-network-cidr&quot;&gt;Fix &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;--pod-network-cidr&lt;/code&gt;&lt;/h3&gt;

  &lt;p&gt;Q: After installing CNI plugin (Flannel), I noticed that I forgot to use &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;--pod-network-cidr=10.244.0.0/16&lt;/code&gt; when initializing an HA cluster with three control planes. How can I fix this without deleting the cluster?&lt;/p&gt;

  &lt;details&gt;
    &lt;summary&gt;&lt;strong&gt;Answer&lt;/strong&gt;&lt;/summary&gt;

    &lt;p&gt;To fix the CIDR of the Pod network in a running HA cluster (3 control planes) without completely reinstalling it, you need to update the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;kube-apiserver&lt;/code&gt; configuration, the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;kubeadm&lt;/code&gt; settings in &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;ClusterConfiguration&lt;/code&gt;, the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;kube-proxy&lt;/code&gt; settings, and the configuration of Flannel itself.&lt;/p&gt;

    &lt;h4 id=&quot;update-the-clusterconfiguration&quot;&gt;1. Update the ClusterConfiguration&lt;/h4&gt;

    &lt;p&gt;Kubernetes stores the initialization settings in the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;kubeadm-config&lt;/code&gt; ConfigMap in the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;kube-system&lt;/code&gt; namespace. These need to be changed so that future operations (such as joining new nodes) use the correct CIDR.&lt;/p&gt;

    &lt;ol&gt;
      &lt;li&gt;
        &lt;p&gt;Edit the ConfigMap:&lt;/p&gt;

        &lt;div class=&quot;language-bash highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;kubectl edit configmap kubeadm-config &lt;span class=&quot;nt&quot;&gt;-n&lt;/span&gt; kube-system
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;        &lt;/div&gt;
      &lt;/li&gt;
      &lt;li&gt;
        &lt;p&gt;Find the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;networking&lt;/code&gt; section and change or add the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;podSubnet&lt;/code&gt; parameter:&lt;/p&gt;

        &lt;div class=&quot;language-yaml highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;na&quot;&gt;networking&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
  &lt;span class=&quot;na&quot;&gt;podSubnet&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;10.244.0.0/16&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;        &lt;/div&gt;
      &lt;/li&gt;
    &lt;/ol&gt;

    &lt;h4 id=&quot;update-control-plane-manifests-on-all-3-nodes&quot;&gt;2. Update Control Plane Manifests (on all 3 nodes)&lt;/h4&gt;

    &lt;p&gt;On each of the three control plane nodes, you need to update the static manifest of &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;kube-controller-manager&lt;/code&gt; so that it correctly allocates Pod CIDR ranges to the nodes.&lt;/p&gt;

    &lt;ol&gt;
      &lt;li&gt;Open the file: &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;sudo vim /etc/kubernetes/manifests/kube-controller-manager.yaml&lt;/code&gt;&lt;/li&gt;
      &lt;li&gt;
        &lt;p&gt;Find the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;--cluster-cidr&lt;/code&gt; flag and set it:&lt;/p&gt;

        &lt;div class=&quot;language-yaml highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;pi&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;--cluster-cidr=10.244.0.0/16&lt;/span&gt;
&lt;span class=&quot;pi&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;--allocate-node-cidrs=true&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;        &lt;/div&gt;
      &lt;/li&gt;
      &lt;li&gt;After saving the file, Kubelet will automatically restart the component. Repeat this on &lt;strong&gt;all three&lt;/strong&gt; master nodes.&lt;/li&gt;
    &lt;/ol&gt;

    &lt;h4 id=&quot;update-kube-proxy-settings&quot;&gt;3. Update kube-proxy settings&lt;/h4&gt;

    &lt;p&gt;&lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;kube-proxy&lt;/code&gt; uses CIDR to configure IPTables/IPVS rules.&lt;/p&gt;

    &lt;ol&gt;
      &lt;li&gt;
        &lt;p&gt;Edit the ConfigMap for kube-proxy:&lt;/p&gt;

        &lt;div class=&quot;language-bash highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;kubectl edit configmap kube-proxy &lt;span class=&quot;nt&quot;&gt;-n&lt;/span&gt; kube-system
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;        &lt;/div&gt;
      &lt;/li&gt;
      &lt;li&gt;
        &lt;p&gt;Find the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;clusterCIDR&lt;/code&gt; parameter and change it:&lt;/p&gt;

        &lt;div class=&quot;language-yaml highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;na&quot;&gt;clusterCIDR&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;10.244.0.0/16&quot;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;        &lt;/div&gt;
      &lt;/li&gt;
      &lt;li&gt;
        &lt;p&gt;Restart all kube-proxy Pods to apply the changes:&lt;/p&gt;

        &lt;div class=&quot;language-bash highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;kubectl rollout restart daemonset kube-proxy &lt;span class=&quot;nt&quot;&gt;-n&lt;/span&gt; kube-system
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;        &lt;/div&gt;

        &lt;p&gt;Everything should work now. If it doesn’t, continue with the steps below.&lt;/p&gt;
      &lt;/li&gt;
    &lt;/ol&gt;

    &lt;h4 id=&quot;reinstall-flannel-with-the-correct-cidr&quot;&gt;4. Reinstall Flannel with the correct CIDR&lt;/h4&gt;

    &lt;p&gt;Since Flannel is already installed with incorrect (or no) settings, it needs to be reconfigured.&lt;/p&gt;

    &lt;ol&gt;
      &lt;li&gt;
        &lt;p&gt;Download the latest Flannel manifest from the official repository:&lt;/p&gt;

        &lt;div class=&quot;language-bash highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;curl &lt;span class=&quot;nt&quot;&gt;-sLO&lt;/span&gt; https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;        &lt;/div&gt;
      &lt;/li&gt;
      &lt;li&gt;
        &lt;p&gt;Check the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;net-conf.json&lt;/code&gt; section in the downloaded file. It should contain:&lt;/p&gt;

        &lt;div class=&quot;language-json highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;nl&quot;&gt;&quot;Network&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;:&lt;/span&gt;&lt;span class=&quot;w&quot;&gt; &lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;10.244.0.0/16&quot;&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;        &lt;/div&gt;
      &lt;/li&gt;
      &lt;li&gt;
        &lt;p&gt;Remove the old Flannel and install the updated one:&lt;/p&gt;

        &lt;div class=&quot;language-bash highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;kubectl delete &lt;span class=&quot;nt&quot;&gt;-f&lt;/span&gt; kube-flannel.yml &lt;span class=&quot;c&quot;&gt;# if you have the old file&lt;/span&gt;
kubectl apply &lt;span class=&quot;nt&quot;&gt;-f&lt;/span&gt; kube-flannel.yml
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;        &lt;/div&gt;
      &lt;/li&gt;
    &lt;/ol&gt;

    &lt;h4 id=&quot;update-node-specs-podcidr&quot;&gt;5. Update Node Specs (PodCIDR)&lt;/h4&gt;

    &lt;p&gt;This is the most critical step. Each node in the cluster already has an old (or empty) &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;podCIDR&lt;/code&gt; assigned to it in its properties. These need to be updated manually.&lt;/p&gt;

    &lt;p&gt;For each node (master and worker), run:&lt;/p&gt;

    &lt;div class=&quot;language-bash highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;kubectl patch node &amp;lt;node-name&amp;gt; &lt;span class=&quot;nt&quot;&gt;-p&lt;/span&gt; &lt;span class=&quot;s1&quot;&gt;&apos;{&quot;spec&quot;:{&quot;podCIDR&quot;:&quot;10.244.X.0/24&quot;, &quot;podCIDRs&quot;:[&quot;10.244.X.0/24&quot;]}}&apos;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;    &lt;/div&gt;

    &lt;p&gt;&lt;em&gt;Where X is a unique number for each node (e.g. 0, 1, 2).&lt;/em&gt;&lt;/p&gt;
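    &lt;p&gt;The patch above can be scripted. A minimal sketch that only &lt;em&gt;prints&lt;/em&gt; the commands so you can review them before running anything; the node names are assumptions taken from the hosts file above, so adjust them to the output of &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;kubectl get nodes&lt;/code&gt;:&lt;/p&gt;

```shell
# Generate (not run) one patch command per node, with sequential /24 subnets.
# Node names below are assumptions -- replace them with your real node names.
NODES="k8s-master-1 k8s-master-2 k8s-master-3 k8s-worker-1 k8s-worker-2 k8s-worker-3"
i=0
for node in $NODES; do
  echo kubectl patch node "$node" -p \
    "{\"spec\":{\"podCIDR\":\"10.244.$i.0/24\",\"podCIDRs\":[\"10.244.$i.0/24\"]}}"
  i=$((i+1))
done
```

    &lt;p&gt;When the printed commands look right, pipe the output to &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;sh&lt;/code&gt; to execute them.&lt;/p&gt;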

    &lt;h4 id=&quot;final-reboot&quot;&gt;6. Final Reboot&lt;/h4&gt;

    &lt;p&gt;For the changes to take effect and Pods to receive addresses from the new range:&lt;/p&gt;

    &lt;ol&gt;
      &lt;li&gt;&lt;strong&gt;Reboot the nodes&lt;/strong&gt; one by one or delete all existing Pods in all namespaces so that they are recreated with the new IPs.&lt;/li&gt;
      &lt;li&gt;Check the network status:&lt;/li&gt;
    &lt;/ol&gt;

    &lt;div class=&quot;language-bash highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;kubectl get nodes &lt;span class=&quot;nt&quot;&gt;-o&lt;/span&gt; &lt;span class=&quot;nv&quot;&gt;jsonpath&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;&lt;span class=&quot;s1&quot;&gt;&apos;{.items[*].spec.podCIDR}&apos;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;    &lt;/div&gt;

    &lt;p&gt;&lt;strong&gt;Important&lt;/strong&gt;: If you already have workloads running, their IP addresses will not change automatically until the Pods are recreated. On an HA cluster, this process can be done one by one for each node (cordon -&amp;gt; drain -&amp;gt; reboot -&amp;gt; uncordon) to avoid downtime.&lt;/p&gt;

  &lt;/details&gt;

  &lt;hr /&gt;

  &lt;h3 id=&quot;resetting-etcd-to-its-original-state&quot;&gt;Resetting etcd to its original state&lt;/h3&gt;

  &lt;p&gt;Q: How do I clear the data in a cluster? Reset it to its original state?&lt;/p&gt;

  &lt;details&gt;
    &lt;summary&gt;&lt;strong&gt;Answer&lt;/strong&gt;&lt;/summary&gt;

    &lt;p&gt;To reset etcd to its “original” (empty) state, you need to completely delete its data on each node. Since etcd is a critical component of Kubernetes, this process will effectively destroy all of your cluster configuration (Deployments, Services, Secrets, etc.).&lt;/p&gt;

    &lt;p&gt;Step-by-step instructions for a cluster deployed via kubeadm:&lt;/p&gt;

    &lt;h4 id=&quot;stop-services-that-depend-on-etcd&quot;&gt;1. Stop services that depend on etcd&lt;/h4&gt;

    &lt;p&gt;On each master node (control-plane), you need to temporarily remove the manifests of static pods so that Kubernetes stops trying to restart etcd and the API server:&lt;/p&gt;

    &lt;div class=&quot;language-bash highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;nb&quot;&gt;sudo mv&lt;/span&gt; /etc/kubernetes/manifests/&lt;span class=&quot;k&quot;&gt;*&lt;/span&gt;.yaml /tmp/
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;    &lt;/div&gt;

    &lt;p&gt;Wait a few seconds for the containers to stop. You can check via &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;crictl ps&lt;/code&gt; or &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;docker ps&lt;/code&gt;.&lt;/p&gt;

    &lt;h4 id=&quot;clear-data-on-all-nodes&quot;&gt;2. Clear data on all nodes&lt;/h4&gt;

    &lt;p&gt;Delete the contents of the etcd data directory (usually &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;/var/lib/etcd&lt;/code&gt;). This needs to be done on all nodes where etcd was running:&lt;/p&gt;

    &lt;div class=&quot;language-bash highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;nb&quot;&gt;sudo rm&lt;/span&gt; &lt;span class=&quot;nt&quot;&gt;-rf&lt;/span&gt; /var/lib/etcd/&lt;span class=&quot;k&quot;&gt;*&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;    &lt;/div&gt;

    &lt;h4 id=&quot;initialize-a-new-empty-cluster&quot;&gt;3. Initialize a new “empty” cluster&lt;/h4&gt;

    &lt;p&gt;Now you need to get etcd to start as a new cluster.&lt;/p&gt;

    &lt;ul&gt;
      &lt;li&gt;&lt;strong&gt;Option A (Full cluster reset via kubeadm):&lt;/strong&gt;&lt;/li&gt;
    &lt;/ul&gt;

    &lt;p&gt;If you want to start from scratch (including certificates and K8s configuration), run on each node:&lt;/p&gt;

    &lt;div class=&quot;language-bash highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;nb&quot;&gt;sudo &lt;/span&gt;kubeadm reset &lt;span class=&quot;nt&quot;&gt;-f&lt;/span&gt;
&lt;span class=&quot;nb&quot;&gt;sudo rm&lt;/span&gt; &lt;span class=&quot;nt&quot;&gt;-rf&lt;/span&gt; /etc/cni/net.d
&lt;span class=&quot;c&quot;&gt;# After that, run &apos;kubeadm init&apos; again on the first node&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;    &lt;/div&gt;

    &lt;ul&gt;
      &lt;li&gt;&lt;strong&gt;Option B (Only clean etcd data without reinstalling K8s):&lt;/strong&gt;&lt;/li&gt;
    &lt;/ul&gt;

    &lt;p&gt;If the certificates in &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;/etc/kubernetes/pki/etcd/&lt;/code&gt; are still valid, you can try to recreate only the database. Revert the manifests:&lt;/p&gt;

    &lt;div class=&quot;language-bash highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;nb&quot;&gt;sudo mv&lt;/span&gt; /tmp/etcd.yaml /etc/kubernetes/manifests/
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;    &lt;/div&gt;

    &lt;p&gt;&lt;em&gt;Important&lt;/em&gt;: If &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;--initial-cluster-state=existing&lt;/code&gt; is specified in &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;etcd.yaml&lt;/code&gt;, you may need to manually change it to &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;new&lt;/code&gt; or use &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;kubeadm init phase etcd local&lt;/code&gt; to generate a correct manifest for the new initialization.&lt;/p&gt;
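    &lt;p&gt;The flag change itself is a one-line &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;sed&lt;/code&gt; substitution. The sketch below runs it against a throwaway demo file; on a real node you would target &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;/etc/kubernetes/manifests/etcd.yaml&lt;/code&gt; instead, after making a backup:&lt;/p&gt;

```shell
# Demo on a throwaway file; on the node, point sed at
# /etc/kubernetes/manifests/etcd.yaml after backing it up.
cat > /tmp/etcd-demo.yaml <<'EOF'
    - --initial-cluster-state=existing
EOF
sed -i.bak 's/--initial-cluster-state=existing/--initial-cluster-state=new/' /tmp/etcd-demo.yaml
cat /tmp/etcd-demo.yaml
```

    &lt;p&gt;Note that on a real node the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;.bak&lt;/code&gt; copy must not stay in the manifests directory, since kubelet treats every file there as a static pod manifest; move the backup elsewhere.&lt;/p&gt;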

    &lt;h4 id=&quot;cleanup-via-etcdctl-if-cluster-is-running&quot;&gt;4. Cleanup via etcdctl (if cluster is running)&lt;/h4&gt;

    &lt;p&gt;If the cluster is “alive” but you just want to delete all data (keys) without stopping the services:&lt;/p&gt;

    &lt;div class=&quot;language-bash highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;etcdctl &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
  &lt;span class=&quot;nt&quot;&gt;--endpoints&lt;/span&gt; https://127.0.0.1:2379 &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
  &lt;span class=&quot;nt&quot;&gt;--cacert&lt;/span&gt; /etc/kubernetes/pki/etcd/ca.crt &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
  &lt;span class=&quot;nt&quot;&gt;--cert&lt;/span&gt; /etc/kubernetes/pki/etcd/peer.crt &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
  &lt;span class=&quot;nt&quot;&gt;--key&lt;/span&gt; /etc/kubernetes/pki/etcd/peer.key &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
  del &lt;span class=&quot;s2&quot;&gt;&quot;&quot;&lt;/span&gt; &lt;span class=&quot;nt&quot;&gt;--prefix&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;    &lt;/div&gt;

    &lt;p&gt;This will delete all Kubernetes objects but will preserve the etcd cluster structure itself.&lt;/p&gt;

    &lt;p&gt;&lt;strong&gt;Warning&lt;/strong&gt;: After a complete cleanup of &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;/var/lib/etcd&lt;/code&gt; data, the API server will not be able to start until you reinitialize it via &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;kubeadm init&lt;/code&gt;.&lt;/p&gt;

  &lt;/details&gt;

  &lt;hr /&gt;

  &lt;h3 id=&quot;kubeconfig&quot;&gt;Kubeconfig&lt;/h3&gt;

  &lt;p&gt;Q: How do I properly migrate kubeconfig to localhost and add the cluster to the list of other clusters?&lt;/p&gt;

  &lt;details&gt;
    &lt;summary&gt;&lt;strong&gt;Answer&lt;/strong&gt;&lt;/summary&gt;

    &lt;p&gt;To properly migrate kubeconfig and merge it with your existing clusters, it is best to avoid manually editing the main configuration file, as this often leads to YAML indentation errors.&lt;/p&gt;

    &lt;h4 id=&quot;upload-the-file-to-localhost&quot;&gt;1. Upload the file to localhost&lt;/h4&gt;

    &lt;p&gt;Use &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;ssh&lt;/code&gt; to read the file directly into a new local file. Replace &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;10.10.0.21&lt;/code&gt; with the IP address of your &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;cp-1&lt;/code&gt;.&lt;/p&gt;

    &lt;div class=&quot;language-bash highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;c&quot;&gt;# On your local machine&lt;/span&gt;
ssh &lt;span class=&quot;nt&quot;&gt;-i&lt;/span&gt; ~/.ssh/k8s_cluster_key k8sadmin@10.10.0.21 &lt;span class=&quot;s2&quot;&gt;&quot;sudo cat /etc/kubernetes/admin.conf&quot;&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;&amp;gt;&lt;/span&gt; kubeconfig-new.yaml
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;    &lt;/div&gt;

    &lt;h4 id=&quot;clean-up-the-configuration-important&quot;&gt;2. Clean up the configuration (Important)&lt;/h4&gt;

    &lt;p&gt;The &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;admin.conf&lt;/code&gt; file generated by &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;kubeadm&lt;/code&gt; usually has standard names (&lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;kubernetes-admin&lt;/code&gt;, &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;kubernetes&lt;/code&gt;). If you have multiple clusters, these names will conflict.&lt;/p&gt;

    &lt;p&gt;Open &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;kubeconfig-new.yaml&lt;/code&gt; in an editor and rename the key fields. For example, if your cluster is called &lt;strong&gt;prod-cluster&lt;/strong&gt;:&lt;/p&gt;

    &lt;ol&gt;
      &lt;li&gt;
        &lt;p&gt;&lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;clusters.name&lt;/code&gt;: &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;kubernetes&lt;/code&gt; ➡️ &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;prod-cluster&lt;/code&gt;&lt;/p&gt;
      &lt;/li&gt;
      &lt;li&gt;
        &lt;p&gt;&lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;contexts.name&lt;/code&gt;: &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;kubernetes-admin@kubernetes&lt;/code&gt; ➡️ &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;admin@prod-cluster&lt;/code&gt;&lt;/p&gt;
      &lt;/li&gt;
      &lt;li&gt;
        &lt;p&gt;&lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;contexts.context.cluster&lt;/code&gt;: &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;kubernetes&lt;/code&gt; ➡️ &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;prod-cluster&lt;/code&gt;&lt;/p&gt;
      &lt;/li&gt;
      &lt;li&gt;
        &lt;p&gt;&lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;contexts.context.user&lt;/code&gt;: &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;kubernetes-admin&lt;/code&gt; ➡️ &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;admin-prod&lt;/code&gt;&lt;/p&gt;
      &lt;/li&gt;
      &lt;li&gt;
        &lt;p&gt;&lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;users.name&lt;/code&gt;: &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;kubernetes-admin&lt;/code&gt; ➡️ &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;admin-prod&lt;/code&gt;&lt;/p&gt;
      &lt;/li&gt;
    &lt;/ol&gt;

    &lt;p&gt;&lt;strong&gt;Tip&lt;/strong&gt;: If you are using a Load Balancer (10.10.0.100), make sure that the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;server&lt;/code&gt; field specifies this IP address, not the internal IP of &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;cp-1&lt;/code&gt;.&lt;/p&gt;

    &lt;h4 id=&quot;merge-with-the-main-kubeconfig&quot;&gt;3. Merge with the main kubeconfig&lt;/h4&gt;

    &lt;p&gt;Instead of copying text, use &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;kubectl&lt;/code&gt; to merge multiple files via an environment variable.&lt;/p&gt;

    &lt;ol&gt;
      &lt;li&gt;
        &lt;p&gt;Temporarily merge the files in memory:&lt;/p&gt;

        &lt;div class=&quot;language-bash highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;nb&quot;&gt;export &lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;KUBECONFIG&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;~/.kube/config:./kubeconfig-new.yaml
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;        &lt;/div&gt;
      &lt;/li&gt;
      &lt;li&gt;
        &lt;p&gt;Write the result to a new file (flattened):&lt;/p&gt;

        &lt;div class=&quot;language-bash highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;kubectl config view &lt;span class=&quot;nt&quot;&gt;--flatten&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;&amp;gt;&lt;/span&gt; ~/.kube/config_new
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;        &lt;/div&gt;
      &lt;/li&gt;
      &lt;li&gt;
        &lt;p&gt;Replace the old config with the new one:&lt;/p&gt;

        &lt;div class=&quot;language-bash highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;nb&quot;&gt;mv&lt;/span&gt; ~/.kube/config_new ~/.kube/config
&lt;span class=&quot;nb&quot;&gt;chmod &lt;/span&gt;600 ~/.kube/config
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;        &lt;/div&gt;
      &lt;/li&gt;
    &lt;/ol&gt;

    &lt;h4 id=&quot;check-and-switch&quot;&gt;4. Check and switch&lt;/h4&gt;

    &lt;p&gt;Now you can see all your clusters:&lt;/p&gt;

    &lt;div class=&quot;language-bash highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;c&quot;&gt;# List contexts&lt;/span&gt;
kubectl config get-contexts

&lt;span class=&quot;c&quot;&gt;# Switch to new cluster&lt;/span&gt;
kubectl config use-context admin@prod-cluster

&lt;span class=&quot;c&quot;&gt;# Check connectivity&lt;/span&gt;
kubectl get nodes
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;    &lt;/div&gt;

    &lt;p&gt;&lt;strong&gt;Alternative (if you are using multiple files)&lt;/strong&gt;&lt;/p&gt;

    &lt;p&gt;If you do not want to mix everything into a single &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;~/.kube/config&lt;/code&gt; file, you can simply keep the files separate in &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;~/.kube/configs/&lt;/code&gt; and add the path to them in your &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;.bashrc&lt;/code&gt; or &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;.zshrc&lt;/code&gt;:&lt;/p&gt;

    &lt;div class=&quot;language-bash highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;nb&quot;&gt;export &lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;KUBECONFIG&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;$HOME&lt;/span&gt;/.kube/config:&lt;span class=&quot;nv&quot;&gt;$HOME&lt;/span&gt;/.kube/configs/prod-cluster.yaml
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;    &lt;/div&gt;
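    &lt;p&gt;If you keep one file per cluster, the variable can also be built dynamically. A small sketch, assuming the same hypothetical &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;~/.kube/configs/&lt;/code&gt; layout:&lt;/p&gt;

```shell
# Join the main config with every per-cluster file in ~/.kube/configs/.
KUBECONFIG="$HOME/.kube/config"
for f in "$HOME"/.kube/configs/*.yaml; do
  if [ -e "$f" ]; then           # skip the unexpanded pattern when the dir is empty
    KUBECONFIG="$KUBECONFIG:$f"
  fi
done
export KUBECONFIG
```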

    &lt;p&gt;Alternatively, the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;kubectl config&lt;/code&gt; command set allows you to manipulate the configuration much more cleanly. Since &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;kubeadm&lt;/code&gt; always creates standard names (&lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;kubernetes&lt;/code&gt;, &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;kubernetes-admin&lt;/code&gt;), the best strategy is to &lt;strong&gt;import the file under new names&lt;/strong&gt;: once the renamed context is in place, &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;kubectl&lt;/code&gt; will automatically pull in the cluster and user associated with it.&lt;/p&gt;

    &lt;p&gt;&lt;strong&gt;Import and rename script&lt;/strong&gt;&lt;/p&gt;

    &lt;p&gt;This approach does not edit &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;temp-k8s.yaml&lt;/code&gt;, but immediately merges it into the main config with the new names.&lt;/p&gt;

    &lt;div class=&quot;language-bash highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;c&quot;&gt;# 1. Upload the file&lt;/span&gt;
ssh k8sadmin@10.10.0.21 &lt;span class=&quot;s2&quot;&gt;&quot;sudo cat /etc/kubernetes/admin.conf&quot;&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;&amp;gt;&lt;/span&gt; temp-k8s.yaml

&lt;span class=&quot;c&quot;&gt;# 2. Determine the name of the new cluster&lt;/span&gt;
&lt;span class=&quot;nv&quot;&gt;NEW_NAME&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;k8s-cluster-prod&quot;&lt;/span&gt;

&lt;span class=&quot;c&quot;&gt;# 3. Add the cluster, user, and context to your main ~/.kube/config&lt;/span&gt;
&lt;span class=&quot;c&quot;&gt;# We pull the data from the temporary file using &apos;view&apos;&lt;/span&gt;

&lt;span class=&quot;c&quot;&gt;# Add the cluster&lt;/span&gt;
kubectl config set-cluster &lt;span class=&quot;s2&quot;&gt;&quot;&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;$NEW_NAME&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;&lt;/span&gt; &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
  &lt;span class=&quot;nt&quot;&gt;--server&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;&lt;span class=&quot;si&quot;&gt;$(&lt;/span&gt;kubectl config view &lt;span class=&quot;nt&quot;&gt;--kubeconfig&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;temp-k8s.yaml &lt;span class=&quot;nt&quot;&gt;-o&lt;/span&gt; &lt;span class=&quot;nv&quot;&gt;jsonpath&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;&lt;span class=&quot;s1&quot;&gt;&apos;{.clusters[0].cluster.server}&apos;&lt;/span&gt;&lt;span class=&quot;si&quot;&gt;)&lt;/span&gt; &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
  &lt;span class=&quot;nt&quot;&gt;--certificate-authority-data&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;&lt;span class=&quot;si&quot;&gt;$(&lt;/span&gt;kubectl config view &lt;span class=&quot;nt&quot;&gt;--kubeconfig&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;temp-k8s.yaml &lt;span class=&quot;nt&quot;&gt;--raw&lt;/span&gt; &lt;span class=&quot;nt&quot;&gt;-o&lt;/span&gt; &lt;span class=&quot;nv&quot;&gt;jsonpath&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;&lt;span class=&quot;s1&quot;&gt;&apos;{.clusters[0].cluster.certificate-authority-data}&apos;&lt;/span&gt;&lt;span class=&quot;si&quot;&gt;)&lt;/span&gt;

&lt;span class=&quot;c&quot;&gt;# Add the user&lt;/span&gt;
kubectl config set-credentials &lt;span class=&quot;s2&quot;&gt;&quot;admin-&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;$NEW_NAME&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;&lt;/span&gt; &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
  &lt;span class=&quot;nt&quot;&gt;--client-certificate-data&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;&lt;span class=&quot;si&quot;&gt;$(&lt;/span&gt;kubectl config view &lt;span class=&quot;nt&quot;&gt;--kubeconfig&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;temp-k8s.yaml &lt;span class=&quot;nt&quot;&gt;--raw&lt;/span&gt; &lt;span class=&quot;nt&quot;&gt;-o&lt;/span&gt; &lt;span class=&quot;nv&quot;&gt;jsonpath&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;&lt;span class=&quot;s1&quot;&gt;&apos;{.users[0].user.client-certificate-data}&apos;&lt;/span&gt;&lt;span class=&quot;si&quot;&gt;)&lt;/span&gt; &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
  &lt;span class=&quot;nt&quot;&gt;--client-key-data&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;&lt;span class=&quot;si&quot;&gt;$(&lt;/span&gt;kubectl config view &lt;span class=&quot;nt&quot;&gt;--kubeconfig&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;temp-k8s.yaml &lt;span class=&quot;nt&quot;&gt;--raw&lt;/span&gt; &lt;span class=&quot;nt&quot;&gt;-o&lt;/span&gt; &lt;span class=&quot;nv&quot;&gt;jsonpath&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;&lt;span class=&quot;s1&quot;&gt;&apos;{.users[0].user.client-key-data}&apos;&lt;/span&gt;&lt;span class=&quot;si&quot;&gt;)&lt;/span&gt;

&lt;span class=&quot;c&quot;&gt;# Creating context&lt;/span&gt;
kubectl config set-context &lt;span class=&quot;s2&quot;&gt;&quot;&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;$NEW_NAME&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;&lt;/span&gt; &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
  &lt;span class=&quot;nt&quot;&gt;--cluster&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;$NEW_NAME&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;&lt;/span&gt; &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
  &lt;span class=&quot;nt&quot;&gt;--user&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;admin-&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;$NEW_NAME&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;&lt;/span&gt;

&lt;span class=&quot;c&quot;&gt;# 4. Delete the temporary file&lt;/span&gt;
&lt;span class=&quot;nb&quot;&gt;rm &lt;/span&gt;temp-k8s.yaml
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;    &lt;/div&gt;

  &lt;/details&gt;

&lt;/details&gt;
      </content>
      <author>
        <name>Andrii Holovin</name>
      </author>

      
        <category term="DevOps"/>
        <category term="Infrastructure"/>
        <category term="Multipass"/>
        <category term="cloud-init"/>
        <category term="Netplan"/>
        <category term="etcd"/>
        <category term="k8s"/>
        <category term="Kubernetes"/>
        <summary type="html">In this guide, we will explore the steps to deploy a High Availability (HA) Kubernetes cluster with an external etcd topology on a local machine running macOS. We will use Multipass to create virtual machines, cloud-init for their initialization, kubeadm for cluster initialization, HAProxy as a load balancer for control plane nodes, Calico as a Container Network Interface (CNI), and MetalLB for load balancing traffic to worker nodes.</summary>
      
    </entry>
  
    <entry xml:lang="en">
      <title type="html">Adding a static IP address to Multipass virtual machines on macOS</title>
      <link href="https://blog.andygol.co.ua/en/2025/12/26/static-ip-for-multipass-vm/" rel="alternate" type="text/html" title="Adding a static IP address to Multipass virtual machines on macOS"/>
      <published>2025-12-26T06:30:00+00:00</published>
      <updated>2025-12-26T06:30:00+00:00</updated>
      <id>https://blog.andygol.co.ua/en/2025/12/26/static-ip-for-multipass-vm</id>
      <content type="html" xml:base="https://blog.andygol.co.ua/en/2025/12/26/static-ip-for-multipass-vm/">
        &lt;p&gt;This guide contains a detailed description of how to add a static IP address to a Multipass virtual machine on macOS. A general description of how to do this can be found in the official documentation in the &lt;a href=&quot;https://documentation.ubuntu.com/multipass/latest/how-to-guides/manage-instances/configure-static-ips/&quot;&gt;Configure static IPs&lt;/a&gt; section; however, on macOS it is not possible to follow those recommendations without certain modifications.&lt;/p&gt;

&lt;iframe width=&quot;560&quot; height=&quot;315&quot; src=&quot;https://www.youtube.com/embed/X_PqiMvaE08?si=1JVrvxjHMduGariC&quot; title=&quot;YouTube video player&quot; frameborder=&quot;0&quot; allow=&quot;accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share&quot; referrerpolicy=&quot;strict-origin-when-cross-origin&quot; allowfullscreen=&quot;&quot;&gt;&lt;/iframe&gt;

&lt;p&gt;&lt;a id=&quot;default-eth&quot;&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2 id=&quot;adding-ip-address-to-the-first-network-interface-of-the-virtual-machine&quot;&gt;Adding IP address to the first network interface of the virtual machine&lt;/h2&gt;

&lt;h3 id=&quot;finding-multipass-bridge&quot;&gt;Finding Multipass bridge&lt;/h3&gt;

&lt;p&gt;To access virtual machines, Multipass (on macOS) creates a bridge named &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;bridge100&lt;/code&gt;. This bridge is used to provide virtual machines with access to the host network. IP addresses are assigned via DHCP, and when a virtual machine is (re)created, it is assigned the next available IP address from the bridge network.&lt;/p&gt;
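&lt;p&gt;If you are curious which addresses this DHCP server has already handed out, you can usually inspect the lease database on the host (assuming the standard macOS &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;bootpd&lt;/code&gt; lease file location):&lt;/p&gt;

&lt;div class=&quot;language-bash highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;# List DHCP leases issued to Multipass VMs (name, IP and MAC address per entry)
cat /var/db/dhcpd_leases
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;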

&lt;p&gt;Running the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;multipass networks&lt;/code&gt; command should show you a list of available network interfaces.&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;/images/2025/12/2025-12-26-multipass-networks.png&quot; alt=&quot;multipass networks&quot; /&gt;&lt;/p&gt;

&lt;p&gt;As you can see, our bridge is not in this list. The output of &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;multipass networks&lt;/code&gt; confirms the main limitation of Multipass on macOS: it “sees” only physical network adapters (Wi-Fi, Ethernet, USB). The virtual bridges it creates (such as bridge100) are ignored because they do not have the hardware profile that the Apple virtualization driver expects.&lt;/p&gt;

&lt;p&gt;Let’s try to find the bridge another way.&lt;/p&gt;

&lt;div class=&quot;language-bash highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;c&quot;&gt;# Find bridge (usually bridge100)&lt;/span&gt;
ifconfig | &lt;span class=&quot;nb&quot;&gt;grep&lt;/span&gt; &lt;span class=&quot;nt&quot;&gt;-A&lt;/span&gt; 2 &lt;span class=&quot;s2&quot;&gt;&quot;^bridge&quot;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;The response should be similar to the following:&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;/images/2025/12/2025-12-26-grep-bridge.png&quot; alt=&quot;&quot; /&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Remember the bridge name&lt;/strong&gt; (for example, &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;bridge100&lt;/code&gt;). We will use it further.&lt;/p&gt;

&lt;p&gt;In my case, the bridge is available at address &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;192.168.2.1/24&lt;/code&gt; and the DHCP server allocates addresses to virtual machines from the range &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;192.168.2.X&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;ifconfig -v bridge100&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;/images/2025/12/2025-12-26-ifconfig-bridge100.png&quot; alt=&quot;&quot; /&gt;&lt;/p&gt;

&lt;h3 id=&quot;configuring-bridge-on-the-host&quot;&gt;Configuring bridge on the host&lt;/h3&gt;

&lt;p&gt;Suppose we need to provide virtual machines with static IP addresses from the network &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;10.10.0.0/24&lt;/code&gt;. To do this, we will add an alias to our existing bridge.&lt;/p&gt;

&lt;div class=&quot;language-bash highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;c&quot;&gt;# Add IP address to bridge&lt;/span&gt;
&lt;span class=&quot;nb&quot;&gt;sudo &lt;/span&gt;ifconfig bridge100 10.10.0.1/24 &lt;span class=&quot;nb&quot;&gt;alias&lt;/span&gt;

&lt;span class=&quot;c&quot;&gt;# Check that it was added&lt;/span&gt;
ifconfig bridge100 | &lt;span class=&quot;nb&quot;&gt;grep&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;inet &quot;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;Expected result:&lt;/strong&gt;&lt;/p&gt;

&lt;div class=&quot;language-console highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;go&quot;&gt;inet 192.168.2.1 netmask 0xffffff00 broadcast 192.168.2.255
inet 10.10.0.1 netmask 0xffffff00 broadcast 10.10.0.255
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;&lt;img src=&quot;/images/2025/12/2025-12-26-bridge100-alias.png&quot; alt=&quot;bridge100 alias&quot; /&gt;&lt;/p&gt;

&lt;p&gt;or using a script&lt;/p&gt;

&lt;div class=&quot;language-bash highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;nv&quot;&gt;TARGET_BRIDGE&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;&lt;span class=&quot;si&quot;&gt;$(&lt;/span&gt;ifconfig &lt;span class=&quot;nt&quot;&gt;-v&lt;/span&gt; | &lt;span class=&quot;nb&quot;&gt;grep&lt;/span&gt; &lt;span class=&quot;nt&quot;&gt;-B&lt;/span&gt; 20 &lt;span class=&quot;s2&quot;&gt;&quot;member: vmenet&quot;&lt;/span&gt; | &lt;span class=&quot;nb&quot;&gt;grep&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;bridge&quot;&lt;/span&gt; | &lt;span class=&quot;nb&quot;&gt;awk&lt;/span&gt; &lt;span class=&quot;nt&quot;&gt;-F&lt;/span&gt;: &lt;span class=&quot;s1&quot;&gt;&apos;{print $1}&apos;&lt;/span&gt; | &lt;span class=&quot;nb&quot;&gt;head&lt;/span&gt; &lt;span class=&quot;nt&quot;&gt;-n&lt;/span&gt; 1&lt;span class=&quot;si&quot;&gt;)&lt;/span&gt;

&lt;span class=&quot;k&quot;&gt;if&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;[&lt;/span&gt; &lt;span class=&quot;nt&quot;&gt;-z&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;$TARGET_BRIDGE&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;]&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;;&lt;/span&gt; &lt;span class=&quot;k&quot;&gt;then
    &lt;/span&gt;&lt;span class=&quot;nb&quot;&gt;echo&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;Error: bridge not found. Check if VM is running.&quot;&lt;/span&gt;
&lt;span class=&quot;k&quot;&gt;else
    &lt;/span&gt;&lt;span class=&quot;nb&quot;&gt;echo&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;VM found on &lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;$TARGET_BRIDGE&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;. Assigning 10.10.0.1...&quot;&lt;/span&gt;
    &lt;span class=&quot;nb&quot;&gt;sudo &lt;/span&gt;ifconfig &lt;span class=&quot;nv&quot;&gt;$TARGET_BRIDGE&lt;/span&gt; 10.10.0.1/24 &lt;span class=&quot;nb&quot;&gt;alias
&lt;/span&gt;&lt;span class=&quot;k&quot;&gt;fi&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;h3 id=&quot;creating-cloud-init-configuration-for-vm&quot;&gt;Creating cloud-init configuration for VM&lt;/h3&gt;

&lt;p&gt;Let’s create the following cloud-init configuration file that contains settings for the network &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;10.10.0.0/24&lt;/code&gt;&lt;/p&gt;

&lt;div class=&quot;language-bash highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;nb&quot;&gt;cat&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;&amp;gt;&lt;/span&gt; multipass-static-ip.yaml &lt;span class=&quot;o&quot;&gt;&amp;lt;&amp;lt;&lt;/span&gt; &lt;span class=&quot;sh&quot;&gt;&apos;&lt;/span&gt;&lt;span class=&quot;no&quot;&gt;EOF&lt;/span&gt;&lt;span class=&quot;sh&quot;&gt;&apos;
#cloud-config

write_files:
  - path: /etc/netplan/60-static-ip.yaml
    permissions: &apos;0600&apos;
    content: |
      network:
        version: 2
        ethernets:
          default:
            dhcp4: true
            addresses:
              - 10.10.0.10/24
            routes:
              - to: default
                via: 10.10.0.1
                metric: 200

runcmd:
  - netplan apply

hostname: test-vm
&lt;/span&gt;&lt;span class=&quot;no&quot;&gt;EOF
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;To configure the network in Ubuntu, we use &lt;a href=&quot;https://netplan.io&quot;&gt;Netplan&lt;/a&gt;. We add the settings for it to the file &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;/etc/netplan/60-static-ip.yaml&lt;/code&gt;. The file contents are located in the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;content:&lt;/code&gt; field. Note the line &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;permissions: &apos;0600&apos;&lt;/code&gt;, which sets read-write permissions for the root user only. If the permissions are too permissive, Netplan will notify you and will not apply the settings. The &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;netplan apply&lt;/code&gt; command applies the settings from the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;/etc/netplan/&lt;/code&gt; folder. The &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;hostname&lt;/code&gt; field contains the name for our virtual machine, which will be added to the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;/etc/hostname&lt;/code&gt; file.&lt;/p&gt;

&lt;h3 id=&quot;launching-vm-and-checking-operation&quot;&gt;Launching VM and checking operation&lt;/h3&gt;

&lt;p&gt;Let’s create our virtual machine&lt;/p&gt;

&lt;div class=&quot;language-bash highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;c&quot;&gt;# Create VM with cloud-init configuration&lt;/span&gt;
multipass launch &lt;span class=&quot;nt&quot;&gt;--name&lt;/span&gt; test-vm &lt;span class=&quot;nt&quot;&gt;--cloud-init&lt;/span&gt; multipass-static-ip.yaml
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Multipass will create a virtual machine with the name specified in the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;--name/-n&lt;/code&gt; parameter and will use the &lt;a href=&quot;https://cloud-init.io&quot;&gt;&lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;cloud-init&lt;/code&gt;&lt;/a&gt; settings from the file specified in &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;--cloud-init&lt;/code&gt;.&lt;/p&gt;
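&lt;p&gt;Before checking the network, we can also make sure that cloud-init actually wrote the Netplan file with the strict permissions discussed above (a quick sanity check; the path matches the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;write_files&lt;/code&gt; entry in our configuration):&lt;/p&gt;

&lt;div class=&quot;language-bash highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;# The file should be owned by root with mode 600
multipass exec -n test-vm -- sudo stat -c &apos;%a %U %n&apos; /etc/netplan/60-static-ip.yaml
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;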

&lt;p&gt;Let’s check the network settings of our virtual machine&lt;/p&gt;

&lt;div class=&quot;language-bash highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;c&quot;&gt;# Check IP addresses on VM&lt;/span&gt;
multipass &lt;span class=&quot;nb&quot;&gt;exec&lt;/span&gt; &lt;span class=&quot;nt&quot;&gt;-n&lt;/span&gt; test-vm &lt;span class=&quot;nt&quot;&gt;--&lt;/span&gt; ip addr show enp0s1

&lt;span class=&quot;c&quot;&gt;# or use netplan&lt;/span&gt;
multipass &lt;span class=&quot;nb&quot;&gt;exec&lt;/span&gt; &lt;span class=&quot;nt&quot;&gt;-n&lt;/span&gt; test-vm &lt;span class=&quot;nt&quot;&gt;--&lt;/span&gt; netplan status

&lt;span class=&quot;c&quot;&gt;# Should show two IPs:&lt;/span&gt;
&lt;span class=&quot;c&quot;&gt;# - 192.168.2.x (DHCP)&lt;/span&gt;
&lt;span class=&quot;c&quot;&gt;# - 10.10.0.10 (static)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;&lt;img src=&quot;/images/2025/12/2025-12-26-test-vm-netplan-status.png&quot; alt=&quot;test-vm netplan status&quot; /&gt;&lt;/p&gt;

&lt;p&gt;Now it’s time to check the connection&lt;/p&gt;

&lt;div class=&quot;language-bash highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;c&quot;&gt;# From host to VM&lt;/span&gt;
ping &lt;span class=&quot;nt&quot;&gt;-c&lt;/span&gt; 4 10.10.0.10
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;&lt;img src=&quot;/images/2025/12/2025-12-26-test-vm-ping-from-host.png&quot; alt=&quot;ping from host to VM&quot; /&gt;&lt;/p&gt;

&lt;div class=&quot;language-bash highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;c&quot;&gt;# From VM to host&lt;/span&gt;
multipass &lt;span class=&quot;nb&quot;&gt;exec&lt;/span&gt; &lt;span class=&quot;nt&quot;&gt;-n&lt;/span&gt; test-vm &lt;span class=&quot;nt&quot;&gt;--&lt;/span&gt; ping &lt;span class=&quot;nt&quot;&gt;-c&lt;/span&gt; 4 10.10.0.1
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;&lt;img src=&quot;/images/2025/12/2025-12-26-test-vm-ping-from-vm-to-host.png&quot; alt=&quot;ping from VM to host&quot; /&gt;&lt;/p&gt;

&lt;div class=&quot;language-bash highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;c&quot;&gt;# Check internet access from VM&lt;/span&gt;
multipass &lt;span class=&quot;nb&quot;&gt;exec&lt;/span&gt; &lt;span class=&quot;nt&quot;&gt;-n&lt;/span&gt; test-vm &lt;span class=&quot;nt&quot;&gt;--&lt;/span&gt; ping &lt;span class=&quot;nt&quot;&gt;-c&lt;/span&gt; 4 8.8.8.8
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;&lt;img src=&quot;/images/2025/12/2025-12-26-test-vm-ping-from-vm-to-internet.png&quot; alt=&quot;ping to internet from VM&quot; /&gt;&lt;/p&gt;

&lt;p&gt;✅ &lt;strong&gt;Static IP address works!&lt;/strong&gt;&lt;/p&gt;

&lt;h3 id=&quot;technical-details&quot;&gt;Technical details&lt;/h3&gt;

&lt;div class=&quot;language-bash highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;c&quot;&gt;# Output list of configuration files&lt;/span&gt;
multipass &lt;span class=&quot;nb&quot;&gt;exec&lt;/span&gt; &lt;span class=&quot;nt&quot;&gt;-n&lt;/span&gt; test-vm &lt;span class=&quot;nt&quot;&gt;--&lt;/span&gt; &lt;span class=&quot;nb&quot;&gt;sudo ls&lt;/span&gt; &lt;span class=&quot;nt&quot;&gt;-la&lt;/span&gt; /etc/netplan

&lt;span class=&quot;c&quot;&gt;# Get merged network configuration&lt;/span&gt;
multipass &lt;span class=&quot;nb&quot;&gt;exec&lt;/span&gt; &lt;span class=&quot;nt&quot;&gt;-n&lt;/span&gt; test-vm &lt;span class=&quot;nt&quot;&gt;--&lt;/span&gt; &lt;span class=&quot;nb&quot;&gt;sudo &lt;/span&gt;netplan get
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;We see that in the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;/etc/netplan&lt;/code&gt; directory there are files &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;50-cloud-init.yaml&lt;/code&gt;, which is created by cloud-init during virtual machine initialization, and the file &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;60-static-ip.yaml&lt;/code&gt;, which we passed in the settings.&lt;/p&gt;

&lt;p&gt;Running the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;sudo netplan get&lt;/code&gt; command will give us the merged network configuration. Compare it with the content of the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;50-cloud-init.yaml&lt;/code&gt; file.&lt;/p&gt;

&lt;div class=&quot;language-bash highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;c&quot;&gt;# Get content of 50-cloud-init.yaml&lt;/span&gt;
multipass &lt;span class=&quot;nb&quot;&gt;exec&lt;/span&gt; &lt;span class=&quot;nt&quot;&gt;-n&lt;/span&gt; test-vm &lt;span class=&quot;nt&quot;&gt;--&lt;/span&gt; &lt;span class=&quot;nb&quot;&gt;sudo cat&lt;/span&gt; /etc/netplan/50-cloud-init.yaml
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Pay attention to the network interface name that &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;cloud-init&lt;/code&gt; uses: &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;default&lt;/code&gt;, matched by MAC address. This is the name we used in our settings, not the kernel name &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;enp0s1&lt;/code&gt;.&lt;/p&gt;

&lt;div class=&quot;language-yaml highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;na&quot;&gt;network&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
  &lt;span class=&quot;na&quot;&gt;version&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;m&quot;&gt;2&lt;/span&gt;
  &lt;span class=&quot;na&quot;&gt;ethernets&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
    &lt;span class=&quot;na&quot;&gt;default&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;c1&quot;&gt;# network interface name&lt;/span&gt;
      &lt;span class=&quot;na&quot;&gt;match&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
        &lt;span class=&quot;na&quot;&gt;macaddress&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;52:54:00:ae:24:22&quot;&lt;/span&gt;
      &lt;span class=&quot;na&quot;&gt;dhcp-identifier&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;mac&quot;&lt;/span&gt;
      &lt;span class=&quot;na&quot;&gt;dhcp4&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;no&quot;&gt;true&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;&lt;a id=&quot;enp0s2&quot;&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2 id=&quot;adding-static-ip-address-to-the-second-network-interface-of-the-virtual-machine&quot;&gt;Adding static IP address to the second network interface of the virtual machine&lt;/h2&gt;

&lt;p&gt;In addition to adding a static IP address to the first network interface, we can do the same for other network interfaces of the virtual machine. To do this, you need to create a corresponding network bridge on the host.&lt;/p&gt;

&lt;h3 id=&quot;multipass-bridge-for-the-second-network-interface&quot;&gt;Multipass bridge for the second network interface&lt;/h3&gt;

&lt;p&gt;Let’s launch a temporary VM to create a new bridge (bridge101)&lt;/p&gt;

&lt;div class=&quot;language-bash highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;multipass launch &lt;span class=&quot;nt&quot;&gt;--name&lt;/span&gt; sandbox-vm &lt;span class=&quot;nt&quot;&gt;--network&lt;/span&gt; &lt;span class=&quot;nv&quot;&gt;name&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;en0,mode&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;manual
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;The parameter &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;--network name=en0,mode=manual&lt;/code&gt; will create a new network interface &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;enp0s2&lt;/code&gt; in the virtual machine, which will be bound to a new bridge. However, after creation, this interface will be inactive because no IP address was assigned to it.&lt;/p&gt;
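&lt;p&gt;As a quick check (assuming the second interface is named &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;enp0s2&lt;/code&gt;, as described above), we can confirm that the interface exists but has no IPv4 address yet:&lt;/p&gt;

&lt;div class=&quot;language-bash highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;# Brief summary of all interfaces in the temporary VM:
# enp0s1 should have a DHCP address, enp0s2 should have none yet
multipass exec -n sandbox-vm -- ip -brief addr show
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;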

&lt;p&gt;The (dis)advantage of this approach is that after deleting all virtual machines bound to this bridge, it will be automatically removed from the system. It exists as long as there are virtual machines bound to it.&lt;/p&gt;

&lt;p&gt;This command allows you to find the name of this bridge:&lt;/p&gt;

&lt;div class=&quot;language-bash highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;ifconfig &lt;span class=&quot;nt&quot;&gt;-v&lt;/span&gt; | &lt;span class=&quot;nb&quot;&gt;grep&lt;/span&gt; &lt;span class=&quot;nt&quot;&gt;-B&lt;/span&gt; 20 &lt;span class=&quot;s2&quot;&gt;&quot;member: vmenet&quot;&lt;/span&gt; | &lt;span class=&quot;nb&quot;&gt;grep&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;bridge&quot;&lt;/span&gt; | &lt;span class=&quot;nb&quot;&gt;awk&lt;/span&gt; &lt;span class=&quot;nt&quot;&gt;-F&lt;/span&gt;: &lt;span class=&quot;s1&quot;&gt;&apos;{print $1}&apos;&lt;/span&gt; | &lt;span class=&quot;nb&quot;&gt;tail&lt;/span&gt; &lt;span class=&quot;nt&quot;&gt;-n&lt;/span&gt; 1
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Most likely the name will be &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;bridge101&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Let’s add an alias to the bridge&lt;/p&gt;

&lt;div class=&quot;language-bash highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;c&quot;&gt;# Add IP address to bridge&lt;/span&gt;
&lt;span class=&quot;nb&quot;&gt;sudo &lt;/span&gt;ifconfig bridge101 10.10.1.1/24 &lt;span class=&quot;nb&quot;&gt;alias&lt;/span&gt;

&lt;span class=&quot;c&quot;&gt;# Check that it was added&lt;/span&gt;
ifconfig bridge101 | &lt;span class=&quot;nb&quot;&gt;grep&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;inet &quot;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;Expected result:&lt;/strong&gt;&lt;/p&gt;

&lt;div class=&quot;language-console highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;go&quot;&gt;inet 10.10.1.1 netmask 0xffffff00 broadcast 10.10.1.255
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;&lt;img src=&quot;/images/2025/12/2025-12-26-bridge101-alias.png&quot; alt=&quot;bridge101 alias&quot; /&gt;&lt;/p&gt;

&lt;p&gt;or using a script&lt;/p&gt;

&lt;div class=&quot;language-bash highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;nv&quot;&gt;TARGET_BRIDGE&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;&lt;span class=&quot;si&quot;&gt;$(&lt;/span&gt;ifconfig &lt;span class=&quot;nt&quot;&gt;-v&lt;/span&gt; | &lt;span class=&quot;nb&quot;&gt;grep&lt;/span&gt; &lt;span class=&quot;nt&quot;&gt;-B&lt;/span&gt; 20 &lt;span class=&quot;s2&quot;&gt;&quot;member: vmenet&quot;&lt;/span&gt; | &lt;span class=&quot;nb&quot;&gt;grep&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;bridge&quot;&lt;/span&gt; | &lt;span class=&quot;nb&quot;&gt;awk&lt;/span&gt; &lt;span class=&quot;nt&quot;&gt;-F&lt;/span&gt;: &lt;span class=&quot;s1&quot;&gt;&apos;{print $1}&apos;&lt;/span&gt; | &lt;span class=&quot;nb&quot;&gt;tail&lt;/span&gt; &lt;span class=&quot;nt&quot;&gt;-n&lt;/span&gt; 1&lt;span class=&quot;si&quot;&gt;)&lt;/span&gt;

&lt;span class=&quot;k&quot;&gt;if&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;[&lt;/span&gt; &lt;span class=&quot;nt&quot;&gt;-z&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;$TARGET_BRIDGE&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;]&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;;&lt;/span&gt; &lt;span class=&quot;k&quot;&gt;then
    &lt;/span&gt;&lt;span class=&quot;nb&quot;&gt;echo&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;Error: bridge not found. Check if VM is running.&quot;&lt;/span&gt;
&lt;span class=&quot;k&quot;&gt;else
    &lt;/span&gt;&lt;span class=&quot;nb&quot;&gt;echo&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;VM found on &lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;$TARGET_BRIDGE&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;. Assigning 10.10.1.1...&quot;&lt;/span&gt;
    &lt;span class=&quot;nb&quot;&gt;sudo &lt;/span&gt;ifconfig &lt;span class=&quot;nv&quot;&gt;$TARGET_BRIDGE&lt;/span&gt; 10.10.1.1/24 &lt;span class=&quot;nb&quot;&gt;alias
&lt;/span&gt;&lt;span class=&quot;k&quot;&gt;fi&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;h3 id=&quot;creating-cloud-init-configuration-for-the-second-network-interface-vm&quot;&gt;Creating cloud-init configuration for the second network interface VM&lt;/h3&gt;

&lt;p&gt;Just like in the first case, let’s create a cloud-init configuration for setting up the virtual machine’s network interface.&lt;/p&gt;

&lt;div class=&quot;language-bash highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;nb&quot;&gt;cat&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;&amp;gt;&lt;/span&gt; multipass-static-ip1.yaml &lt;span class=&quot;o&quot;&gt;&amp;lt;&amp;lt;&lt;/span&gt; &lt;span class=&quot;sh&quot;&gt;&apos;&lt;/span&gt;&lt;span class=&quot;no&quot;&gt;EOF&lt;/span&gt;&lt;span class=&quot;sh&quot;&gt;&apos;
#cloud-config

write_files:
  - path: /etc/netplan/60-custom-network.yaml
    permissions: &apos;0600&apos;
    content: |
      network:
        version: 2
        ethernets:
          enp0s2:
            addresses:
              - 10.10.1.20/24
            # We omit the default gateway here (no &quot;via: 10.10.0.1&quot; as in the first config)
            # Instead, we only allow direct access to the 10.10.1.0/24 network
            routes:
              - to: 10.10.1.0/24
                scope: link
runcmd:
  - netplan apply
&lt;/span&gt;&lt;span class=&quot;no&quot;&gt;EOF
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;h3 id=&quot;launching-and-checking-the-virtual-machine-operation&quot;&gt;Launching and checking the virtual machine operation&lt;/h3&gt;

&lt;p&gt;Let’s launch a virtual machine named &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;test-vm1&lt;/code&gt;.&lt;/p&gt;

&lt;div class=&quot;language-bash highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;c&quot;&gt;# Create a VM with cloud-init configuration&lt;/span&gt;
multipass launch &lt;span class=&quot;nt&quot;&gt;--name&lt;/span&gt; test-vm1 &lt;span class=&quot;nt&quot;&gt;--network&lt;/span&gt; &lt;span class=&quot;nv&quot;&gt;name&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;en0,mode&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;manual &lt;span class=&quot;nt&quot;&gt;--cloud-init&lt;/span&gt; multipass-static-ip1.yaml
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Wait for the VM creation and launch process to complete. After that, we can delete the temporary virtual machine that we used only to make the system create the new network bridge.&lt;/p&gt;

&lt;div class=&quot;language-bash highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;multipass delete sandbox-vm &lt;span class=&quot;nt&quot;&gt;--purge&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Creating the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;test-vm1&lt;/code&gt; virtual machine is the same as creating &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;test-vm&lt;/code&gt;, with the difference that it will have two network interfaces.&lt;/p&gt;

&lt;p&gt;Now let’s check the virtual machine’s network interface settings&lt;/p&gt;

&lt;div class=&quot;language-bash highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;c&quot;&gt;# Check the IP addresses on the VM&lt;/span&gt;
multipass &lt;span class=&quot;nb&quot;&gt;exec&lt;/span&gt; &lt;span class=&quot;nt&quot;&gt;-n&lt;/span&gt; test-vm1 &lt;span class=&quot;nt&quot;&gt;--&lt;/span&gt; ip addr show

&lt;span class=&quot;c&quot;&gt;# or use netplan&lt;/span&gt;
multipass &lt;span class=&quot;nb&quot;&gt;exec&lt;/span&gt; &lt;span class=&quot;nt&quot;&gt;-n&lt;/span&gt; test-vm1 &lt;span class=&quot;nt&quot;&gt;--&lt;/span&gt; netplan status
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;You should see two IPs:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;192.168.2.x (DHCP) on enp0s1&lt;/li&gt;
  &lt;li&gt;10.10.1.20 (static) on enp0s2&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;img src=&quot;/images/2025/12/2025-12-26-test-vm1-netplan-status.png&quot; alt=&quot;test-vm1 netplan status&quot; /&gt;&lt;/p&gt;

&lt;p&gt;Let’s check the connection&lt;/p&gt;

&lt;div class=&quot;language-bash highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;c&quot;&gt;# From host to VM1&lt;/span&gt;
ping &lt;span class=&quot;nt&quot;&gt;-c&lt;/span&gt; 4 10.10.1.20
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;&lt;img src=&quot;/images/2025/12/2025-12-26-test-vm1-ping-from-host.png&quot; alt=&quot;ping from host to VM1&quot; /&gt;&lt;/p&gt;

&lt;div class=&quot;language-bash highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;c&quot;&gt;# From VM1 to host&lt;/span&gt;
multipass &lt;span class=&quot;nb&quot;&gt;exec&lt;/span&gt; &lt;span class=&quot;nt&quot;&gt;-n&lt;/span&gt; test-vm1 &lt;span class=&quot;nt&quot;&gt;--&lt;/span&gt; ping &lt;span class=&quot;nt&quot;&gt;-c&lt;/span&gt; 4 10.10.1.1
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;&lt;img src=&quot;/images/2025/12/2025-12-26-test-vm1-ping-from-vm-to-host.png&quot; alt=&quot;ping from VM1 to host&quot; /&gt;&lt;/p&gt;

&lt;div class=&quot;language-bash highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;c&quot;&gt;# Traffic between VM and VM1&lt;/span&gt;
multipass &lt;span class=&quot;nb&quot;&gt;exec&lt;/span&gt; &lt;span class=&quot;nt&quot;&gt;-n&lt;/span&gt; test-vm1 &lt;span class=&quot;nt&quot;&gt;--&lt;/span&gt; ping &lt;span class=&quot;nt&quot;&gt;-c&lt;/span&gt; 4 10.10.0.10
multipass &lt;span class=&quot;nb&quot;&gt;exec&lt;/span&gt; &lt;span class=&quot;nt&quot;&gt;-n&lt;/span&gt; test-vm &lt;span class=&quot;nt&quot;&gt;--&lt;/span&gt; ping &lt;span class=&quot;nt&quot;&gt;-c&lt;/span&gt; 4 10.10.1.20
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;&lt;img src=&quot;/images/2025/12/2025-12-26-test-vm1-ping-from-vm1-to-vm.png&quot; alt=&quot;Traffic between VM and VM1&quot; /&gt;&lt;/p&gt;

&lt;div class=&quot;language-bash highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;c&quot;&gt;# Check Internet access from VM1&lt;/span&gt;
multipass &lt;span class=&quot;nb&quot;&gt;exec&lt;/span&gt; &lt;span class=&quot;nt&quot;&gt;-n&lt;/span&gt; test-vm1 &lt;span class=&quot;nt&quot;&gt;--&lt;/span&gt; ping &lt;span class=&quot;nt&quot;&gt;-c&lt;/span&gt; 4 8.8.8.8
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;&lt;img src=&quot;/images/2025/12/2025-12-26-test-vm-ping-from1-vm-to-internet.png&quot; alt=&quot;Check Internet access from VM1&quot; /&gt;&lt;/p&gt;

&lt;p&gt;✅ &lt;strong&gt;The static IP address on the second network interface is working!&lt;/strong&gt;&lt;/p&gt;

&lt;h2 id=&quot;cleaning-up&quot;&gt;Cleaning up&lt;/h2&gt;

&lt;p&gt;Delete virtual machines with the command &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;multipass delete &amp;lt;name-vm&amp;gt; --purge&lt;/code&gt; or all at once with &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;multipass delete --all --purge&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;After deleting all virtual machines that were attached to the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;bridge101&lt;/code&gt; bridge, it will be removed automatically.&lt;/p&gt;

&lt;p&gt;The result of executing &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;ifconfig -v bridge101&lt;/code&gt; will be&lt;/p&gt;

&lt;div class=&quot;language-console highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;go&quot;&gt;ifconfig: interface bridge101 does not exist
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;To remove the alias from &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;bridge100&lt;/code&gt;, execute &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;sudo ifconfig bridge100 -alias 10.10.0.1&lt;/code&gt;. You can also clear the ARP cache with &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;sudo arp -d -a&lt;/code&gt;.&lt;/p&gt;
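&lt;p&gt;These cleanup steps can be combined into a single block (the alias address &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;10.10.0.1&lt;/code&gt; is the one added earlier; adjust it if you used a different network):&lt;/p&gt;

&lt;div class=&quot;language-bash highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;# Delete all VMs and reclaim disk space
multipass delete --all --purge

# Remove the alias we added to bridge100
sudo ifconfig bridge100 -alias 10.10.0.1

# Flush the ARP cache
sudo arp -d -a
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;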

&lt;h2 id=&quot;summary&quot;&gt;Summary&lt;/h2&gt;

&lt;p&gt;With these instructions, you can add static addresses to Multipass virtual machines. This can be useful when you need to use a pool of pre-allocated addresses.&lt;/p&gt;

&lt;p&gt;⚠️ This guide was created and tested for macOS. Working with other operating systems may differ depending on the features and approaches you use.&lt;/p&gt;
      </content>
      <author>
        <name>Andrii Holovin</name>
      </author>

      
        <category term="DevOps"/>
        <category term="Infrastructure"/>
        <category term="Multipass"/>
        <category term="cloud-init"/>
        <category term="Netplan"/>
<summary type="html">This guide contains a detailed description of how to add a static IP address to a Multipass virtual machine on macOS. A general description of how to do this can be found in the official documentation in the Configure static IPs section; however, on macOS it is not possible to follow those recommendations without certain modifications.</summary>
      
    </entry>
  
    <entry xml:lang="en">
      <title type="html">Deploying a Kubernetes Cluster on a Local Machine: A Complete Step-by-Step Guide</title>
      <link href="https://blog.andygol.co.ua/en/2025/12/12/k8s-cluster-with-kubeadm/" rel="alternate" type="text/html" title="Deploying a Kubernetes Cluster on a Local Machine: A Complete Step-by-Step Guide"/>
      <published>2025-12-12T06:30:00+00:00</published>
      <updated>2025-12-12T06:30:00+00:00</updated>
      <id>https://blog.andygol.co.ua/en/2025/12/12/k8s-cluster-with-kubeadm</id>
      <content type="html" xml:base="https://blog.andygol.co.ua/en/2025/12/12/k8s-cluster-with-kubeadm/">
        &lt;blockquote&gt;
  &lt;p&gt;💡 You won’t find such a detailed step-by-step guide in the official Kubernetes documentation! The official docs contain separate recommendations. Here everything is gathered in one place so you can quickly and easily create your first cluster.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2 id=&quot;what-well-do&quot;&gt;What we’ll do&lt;/h2&gt;

&lt;p&gt;We’ll create a full Kubernetes cluster on your local machine. If you have separate physical machines, you can adapt this guide to run on physical hardware. For this guide we will use:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;strong&gt;Multipass&lt;/strong&gt; — a tool for creating Ubuntu virtual machines&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;kubeadm&lt;/strong&gt; — the primary tool for initializing the cluster&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Flannel&lt;/strong&gt; — a CNI plugin that provides Pod networking&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;MetalLB&lt;/strong&gt; — a load balancer for our bare-metal cluster&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Our cluster will consist of &lt;strong&gt;three nodes&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;1 control plane node&lt;/li&gt;
  &lt;li&gt;2 worker nodes&lt;/li&gt;
&lt;/ul&gt;

&lt;iframe width=&quot;560&quot; height=&quot;315&quot; src=&quot;https://www.youtube.com/embed/ji3nKGN16hQ?si=ZHCl9hJyLgmTPLCn&quot; title=&quot;YouTube video player&quot; frameborder=&quot;0&quot; allow=&quot;accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share&quot; referrerpolicy=&quot;strict-origin-when-cross-origin&quot; allowfullscreen=&quot;&quot;&gt;&lt;/iframe&gt;

&lt;h3 id=&quot;prerequisites&quot;&gt;Prerequisites&lt;/h3&gt;

&lt;p&gt;Before starting, install &lt;a href=&quot;https://canonical.com/multipass&quot;&gt;Multipass&lt;/a&gt;:&lt;/p&gt;

&lt;div class=&quot;language-bash highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;brew &lt;span class=&quot;nb&quot;&gt;install &lt;/span&gt;multipass
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;h2 id=&quot;step-1-create-virtual-machines&quot;&gt;Step 1: Create virtual machines&lt;/h2&gt;

&lt;h3 id=&quot;define-the-list-of-nodes&quot;&gt;Define the list of nodes&lt;/h3&gt;

&lt;p&gt;Create an array with the names of the three virtual machines. This is just a simple list of the machines we need.&lt;/p&gt;

&lt;div class=&quot;language-bash highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;nv&quot;&gt;NODES&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=(&lt;/span&gt;k8s-control k8s-worker-1 k8s-worker-2&lt;span class=&quot;o&quot;&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;h3 id=&quot;create-vms&quot;&gt;Create VMs&lt;/h3&gt;

&lt;p&gt;Now use multipass to create three Ubuntu virtual machines.&lt;/p&gt;

&lt;div class=&quot;language-bash highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;k&quot;&gt;for &lt;/span&gt;NODE &lt;span class=&quot;k&quot;&gt;in&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;&lt;/span&gt;&lt;span class=&quot;k&quot;&gt;${&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;NODES&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;[@]&lt;/span&gt;&lt;span class=&quot;k&quot;&gt;}&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;;&lt;/span&gt; &lt;span class=&quot;k&quot;&gt;do
  &lt;/span&gt;multipass launch &lt;span class=&quot;nt&quot;&gt;--name&lt;/span&gt; &lt;span class=&quot;nv&quot;&gt;$NODE&lt;/span&gt; &lt;span class=&quot;nt&quot;&gt;--cpus&lt;/span&gt; 2 &lt;span class=&quot;nt&quot;&gt;--memory&lt;/span&gt; 4G &lt;span class=&quot;nt&quot;&gt;--disk&lt;/span&gt; 20G
&lt;span class=&quot;k&quot;&gt;done&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Using the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;for NODE in &quot;${NODES[@]}&quot;&lt;/code&gt; loop we iterate over each name; &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;multipass launch --name $NODE&lt;/code&gt; creates a VM with the given name and these parameters:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;--cpus 2&lt;/code&gt; — allocate 2 CPU cores (minimum for K8s)&lt;/li&gt;
  &lt;li&gt;&lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;--memory 4G&lt;/code&gt; — allocate 4 GB of RAM&lt;/li&gt;
  &lt;li&gt;&lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;--disk 20G&lt;/code&gt; — allocate 20 GB of disk space&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Multipass will automatically download the Ubuntu image. This will take a few minutes ☕.&lt;/p&gt;
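
&lt;p&gt;Once the loop finishes, it’s worth confirming that all three machines are up before moving on (the IP addresses will differ on your machine):&lt;/p&gt;

&lt;div class=&quot;language-bash highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;multipass list
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Each node should be in the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;Running&lt;/code&gt; state.&lt;/p&gt;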

&lt;h2 id=&quot;step-2-prepare-all-nodes&quot;&gt;Step 2: Prepare all nodes&lt;/h2&gt;

&lt;p&gt;Once the virtual machines are created, start configuring them. These steps need to be executed on all three machines.&lt;/p&gt;

&lt;h3 id=&quot;21-system-update&quot;&gt;2.1. System update&lt;/h3&gt;

&lt;div class=&quot;language-bash highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;nb&quot;&gt;echo&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;=== [1/7] Updating system on all nodes ===&quot;&lt;/span&gt;
&lt;span class=&quot;k&quot;&gt;for &lt;/span&gt;NODE &lt;span class=&quot;k&quot;&gt;in&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;&lt;/span&gt;&lt;span class=&quot;k&quot;&gt;${&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;NODES&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;[@]&lt;/span&gt;&lt;span class=&quot;k&quot;&gt;}&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;;&lt;/span&gt; &lt;span class=&quot;k&quot;&gt;do
  &lt;/span&gt;multipass &lt;span class=&quot;nb&quot;&gt;exec&lt;/span&gt; &lt;span class=&quot;nv&quot;&gt;$NODE&lt;/span&gt; &lt;span class=&quot;nt&quot;&gt;--&lt;/span&gt; bash &lt;span class=&quot;nt&quot;&gt;-c&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;
    sudo apt-get update &amp;amp;&amp;amp;
    sudo apt-get upgrade -y
  &quot;&lt;/span&gt;
&lt;span class=&quot;k&quot;&gt;done&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;multipass exec $NODE&lt;/code&gt; — runs a command on the VM&lt;/li&gt;
  &lt;li&gt;&lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;sudo apt-get update&lt;/code&gt; — updates the package lists&lt;/li&gt;
  &lt;li&gt;&lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;sudo apt-get upgrade -y&lt;/code&gt; — installs all upgrades; &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;-y&lt;/code&gt; answers yes to prompts&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;It’s important to start with an up-to-date system with the latest security fixes and package updates.&lt;/p&gt;

&lt;p&gt;After this command all packages will be updated to the latest versions.&lt;/p&gt;

&lt;h3 id=&quot;22-disable-firewall&quot;&gt;2.2. Disable firewall&lt;/h3&gt;

&lt;div class=&quot;language-bash highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;nb&quot;&gt;echo&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;=== [2/7] Disabling firewall on all nodes ===&quot;&lt;/span&gt;
&lt;span class=&quot;k&quot;&gt;for &lt;/span&gt;NODE &lt;span class=&quot;k&quot;&gt;in&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;&lt;/span&gt;&lt;span class=&quot;k&quot;&gt;${&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;NODES&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;[@]&lt;/span&gt;&lt;span class=&quot;k&quot;&gt;}&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;;&lt;/span&gt; &lt;span class=&quot;k&quot;&gt;do
  &lt;/span&gt;multipass &lt;span class=&quot;nb&quot;&gt;exec&lt;/span&gt; &lt;span class=&quot;nv&quot;&gt;$NODE&lt;/span&gt; &lt;span class=&quot;nt&quot;&gt;--&lt;/span&gt; &lt;span class=&quot;nb&quot;&gt;sudo &lt;/span&gt;ufw disable
&lt;span class=&quot;k&quot;&gt;done&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;UFW is Ubuntu’s firewall. For a learning cluster we disable the firewall to avoid networking issues between nodes.&lt;/p&gt;

&lt;p&gt;⚠️ &lt;strong&gt;In production&lt;/strong&gt; you should configure the firewall properly and open required ports!&lt;/p&gt;

&lt;h3 id=&quot;23-load-kernel-modules&quot;&gt;2.3. Load kernel modules&lt;/h3&gt;

&lt;div class=&quot;language-bash highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;nb&quot;&gt;echo&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;=== [3/7] Configuring kernel modules ===&quot;&lt;/span&gt;
&lt;span class=&quot;k&quot;&gt;for &lt;/span&gt;NODE &lt;span class=&quot;k&quot;&gt;in&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;&lt;/span&gt;&lt;span class=&quot;k&quot;&gt;${&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;NODES&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;[@]&lt;/span&gt;&lt;span class=&quot;k&quot;&gt;}&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;;&lt;/span&gt; &lt;span class=&quot;k&quot;&gt;do
  &lt;/span&gt;multipass &lt;span class=&quot;nb&quot;&gt;exec&lt;/span&gt; &lt;span class=&quot;nv&quot;&gt;$NODE&lt;/span&gt; &lt;span class=&quot;nt&quot;&gt;--&lt;/span&gt; bash &lt;span class=&quot;nt&quot;&gt;-c&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;
    echo -e &apos;overlay&lt;/span&gt;&lt;span class=&quot;se&quot;&gt;\n&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;br_netfilter&apos; | sudo tee /etc/modules-load.d/k8s.conf
    sudo modprobe overlay
    sudo modprobe br_netfilter
  &quot;&lt;/span&gt;
&lt;span class=&quot;k&quot;&gt;done&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Kubernetes requires these two Linux kernel modules to be enabled:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;overlay&lt;/code&gt; — for container filesystem layering&lt;/li&gt;
  &lt;li&gt;&lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;br_netfilter&lt;/code&gt; — makes traffic crossing Linux bridges visible to iptables, which Kubernetes needs for Pod-to-Pod networking&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The first line writes these modules to a config file so they’re loaded at boot. The next two lines load them now.&lt;/p&gt;
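
&lt;p&gt;To double-check that both modules are actually loaded, you can query &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;lsmod&lt;/code&gt; on any node; both names should appear in the output:&lt;/p&gt;

&lt;div class=&quot;language-bash highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;multipass exec k8s-control -- bash -c &quot;lsmod | grep -E &apos;overlay|br_netfilter&apos;&quot;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;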

&lt;h3 id=&quot;24-configure-sysctl-networking-parameters&quot;&gt;2.4. Configure sysctl networking parameters&lt;/h3&gt;

&lt;div class=&quot;language-bash highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;nb&quot;&gt;echo&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;=== [4/7] Configuring networking sysctl parameters ===&quot;&lt;/span&gt;
&lt;span class=&quot;k&quot;&gt;for &lt;/span&gt;NODE &lt;span class=&quot;k&quot;&gt;in&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;&lt;/span&gt;&lt;span class=&quot;k&quot;&gt;${&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;NODES&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;[@]&lt;/span&gt;&lt;span class=&quot;k&quot;&gt;}&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;;&lt;/span&gt; &lt;span class=&quot;k&quot;&gt;do
  &lt;/span&gt;multipass &lt;span class=&quot;nb&quot;&gt;exec&lt;/span&gt; &lt;span class=&quot;nv&quot;&gt;$NODE&lt;/span&gt; &lt;span class=&quot;nt&quot;&gt;--&lt;/span&gt; bash &lt;span class=&quot;nt&quot;&gt;-c&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;
    cat &amp;lt;&amp;lt;EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables  = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward                 = 1
EOF
    sudo sysctl --system
  &quot;&lt;/span&gt;
&lt;span class=&quot;k&quot;&gt;done&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;We configure kernel networking settings:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;bridge-nf-call-iptables — allows iptables to see bridged traffic (IPv4 and IPv6)&lt;/li&gt;
  &lt;li&gt;ip_forward — enables packet forwarding between interfaces&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;sysctl --system&lt;/code&gt; reloads all sysctl configuration files and applies these settings immediately, without a reboot.&lt;/p&gt;

&lt;p&gt;This is critical for Kubernetes networking!&lt;/p&gt;
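
&lt;p&gt;You can verify that the parameters took effect by querying them directly; each one should report a value of 1:&lt;/p&gt;

&lt;div class=&quot;language-bash highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;multipass exec k8s-control -- sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.ipv4.ip_forward
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;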

&lt;h3 id=&quot;25-install-containerd&quot;&gt;2.5. Install containerd&lt;/h3&gt;

&lt;div class=&quot;language-bash highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;nb&quot;&gt;echo&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;=== [5/7] Installing containerd ===&quot;&lt;/span&gt;
&lt;span class=&quot;k&quot;&gt;for &lt;/span&gt;NODE &lt;span class=&quot;k&quot;&gt;in&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;&lt;/span&gt;&lt;span class=&quot;k&quot;&gt;${&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;NODES&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;[@]&lt;/span&gt;&lt;span class=&quot;k&quot;&gt;}&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;;&lt;/span&gt; &lt;span class=&quot;k&quot;&gt;do
  &lt;/span&gt;multipass &lt;span class=&quot;nb&quot;&gt;exec&lt;/span&gt; &lt;span class=&quot;nv&quot;&gt;$NODE&lt;/span&gt; &lt;span class=&quot;nt&quot;&gt;--&lt;/span&gt; &lt;span class=&quot;nb&quot;&gt;sudo &lt;/span&gt;apt-get &lt;span class=&quot;nb&quot;&gt;install&lt;/span&gt; &lt;span class=&quot;nt&quot;&gt;-y&lt;/span&gt; containerd
&lt;span class=&quot;k&quot;&gt;done&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;containerd is the container runtime that actually runs and manages containers on each node. Kubernetes supports several CRI-compatible runtimes; containerd is a recommended and popular choice.&lt;/p&gt;

&lt;h3 id=&quot;26-configure-containerd&quot;&gt;2.6. Configure containerd&lt;/h3&gt;

&lt;div class=&quot;language-bash highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;nb&quot;&gt;echo&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;=== [6/7] Configuring containerd and CRI ===&quot;&lt;/span&gt;
&lt;span class=&quot;k&quot;&gt;for &lt;/span&gt;NODE &lt;span class=&quot;k&quot;&gt;in&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;&lt;/span&gt;&lt;span class=&quot;k&quot;&gt;${&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;NODES&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;[@]&lt;/span&gt;&lt;span class=&quot;k&quot;&gt;}&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;;&lt;/span&gt; &lt;span class=&quot;k&quot;&gt;do
  &lt;/span&gt;multipass &lt;span class=&quot;nb&quot;&gt;exec&lt;/span&gt; &lt;span class=&quot;nv&quot;&gt;$NODE&lt;/span&gt; &lt;span class=&quot;nt&quot;&gt;--&lt;/span&gt; bash &lt;span class=&quot;nt&quot;&gt;-c&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;
    sudo mkdir -p /etc/containerd
    containerd config default | sudo tee /etc/containerd/config.toml

    # Update the sandbox image
    sudo sed -i &apos;s/registry.k8s.io&lt;/span&gt;&lt;span class=&quot;se&quot;&gt;\\&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;/pause:3.8/registry.k8s.io&lt;/span&gt;&lt;span class=&quot;se&quot;&gt;\\&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;/pause:3.10.1/&apos; /etc/containerd/config.toml

    # Enable cgroup via systemd
    sudo sed -i &apos;s/SystemdCgroup = false/SystemdCgroup = true/&apos; /etc/containerd/config.toml

    # Add crictl config
    sudo tee /etc/crictl.yaml &amp;lt;&amp;lt;EOF
runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
timeout: 10
debug: false
EOF

    sudo systemctl restart containerd
    sudo systemctl enable containerd
  &quot;&lt;/span&gt;
&lt;span class=&quot;k&quot;&gt;done&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;What happens here:&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;Generate the default containerd config into /etc/containerd/config.toml&lt;/li&gt;
  &lt;li&gt;Update the Kubernetes pause image to version 3.10.1&lt;/li&gt;
  &lt;li&gt;Enable SystemdCgroup to use systemd for cgroup management&lt;/li&gt;
  &lt;li&gt;Configure crictl to talk to containerd&lt;/li&gt;
  &lt;li&gt;Restart and enable containerd&lt;/li&gt;
&lt;/ol&gt;
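
&lt;p&gt;Before moving on, it’s a good idea to confirm that containerd restarted cleanly and that crictl can reach it through the configured socket:&lt;/p&gt;

&lt;div class=&quot;language-bash highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;multipass exec k8s-control -- sudo systemctl is-active containerd
multipass exec k8s-control -- sudo crictl info
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;The first command should print &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;active&lt;/code&gt;, and &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;crictl info&lt;/code&gt; should return the runtime status as JSON.&lt;/p&gt;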

&lt;h3 id=&quot;27-install-kubernetes-components&quot;&gt;2.7. Install Kubernetes components&lt;/h3&gt;

&lt;div class=&quot;language-bash highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;nb&quot;&gt;echo&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;=== [7/7] Installing Kubernetes components ===&quot;&lt;/span&gt;
&lt;span class=&quot;k&quot;&gt;for &lt;/span&gt;NODE &lt;span class=&quot;k&quot;&gt;in&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;&lt;/span&gt;&lt;span class=&quot;k&quot;&gt;${&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;NODES&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;[@]&lt;/span&gt;&lt;span class=&quot;k&quot;&gt;}&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;;&lt;/span&gt; &lt;span class=&quot;k&quot;&gt;do
  &lt;/span&gt;multipass &lt;span class=&quot;nb&quot;&gt;exec&lt;/span&gt; &lt;span class=&quot;nv&quot;&gt;$NODE&lt;/span&gt; &lt;span class=&quot;nt&quot;&gt;--&lt;/span&gt; bash &lt;span class=&quot;nt&quot;&gt;-c&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;
    sudo apt-get install -y apt-transport-https ca-certificates curl gpg

    curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.34/deb/Release.key &lt;/span&gt;&lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;
      | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg

    echo &apos;deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.34/deb/ /&apos; &lt;/span&gt;&lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;
      | sudo tee /etc/apt/sources.list.d/kubernetes.list

    sudo apt-get update
    sudo apt-get install -y kubelet kubeadm kubectl
    sudo apt-mark hold kubelet kubeadm kubectl
    sudo systemctl enable kubelet
  &quot;&lt;/span&gt;
&lt;span class=&quot;k&quot;&gt;done&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;ol&gt;
  &lt;li&gt;Prepare tools for secure package installation (HTTPS, certificates, GPG)&lt;/li&gt;
  &lt;li&gt;Add the official Kubernetes signing key&lt;/li&gt;
  &lt;li&gt;Add the Kubernetes v1.34 repository&lt;/li&gt;
  &lt;li&gt;Install:
    &lt;ul&gt;
      &lt;li&gt;&lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;kubelet&lt;/code&gt; — agent on each node&lt;/li&gt;
      &lt;li&gt;&lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;kubeadm&lt;/code&gt; — initializer tool&lt;/li&gt;
      &lt;li&gt;&lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;kubectl&lt;/code&gt; — CLI client&lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;apt-mark hold&lt;/code&gt; prevents these packages from being upgraded automatically, since component versions must stay in sync across the cluster. Finally, &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;systemctl enable kubelet&lt;/code&gt; makes the kubelet start on boot.&lt;/p&gt;
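
&lt;p&gt;To confirm that the components are installed and pinned, check one of the nodes:&lt;/p&gt;

&lt;div class=&quot;language-bash highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;multipass exec k8s-control -- kubeadm version -o short
multipass exec k8s-control -- apt-mark showhold
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;&lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;apt-mark showhold&lt;/code&gt; should list kubelet, kubeadm and kubectl.&lt;/p&gt;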

&lt;h3 id=&quot;28-done-&quot;&gt;2.8. Done! ✅&lt;/h3&gt;

&lt;p&gt;These steps can be combined into a single script to prepare all nodes for cluster creation.&lt;/p&gt;

&lt;p&gt;All three machines are now ready to become part of a Kubernetes cluster.&lt;sup id=&quot;fnref:1&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:1&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;1&lt;/a&gt;&lt;/sup&gt;&lt;/p&gt;

&lt;h2 id=&quot;step-3-initialize-the-control-plane&quot;&gt;Step 3: Initialize the control plane&lt;/h2&gt;

&lt;h3 id=&quot;connect-to-the-control-plane&quot;&gt;Connect to the control plane&lt;/h3&gt;

&lt;div class=&quot;language-bash highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;multipass shell k8s-control
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;This opens a shell inside the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;k8s-control&lt;/code&gt; VM. Now we’re working directly on that machine.&lt;/p&gt;

&lt;h3 id=&quot;get-the-control-plane-ip&quot;&gt;Get the control plane IP&lt;/h3&gt;

&lt;div class=&quot;language-bash highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;nv&quot;&gt;CONTROL_IP&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;&lt;span class=&quot;si&quot;&gt;$(&lt;/span&gt;&lt;span class=&quot;nb&quot;&gt;hostname&lt;/span&gt; &lt;span class=&quot;nt&quot;&gt;-I&lt;/span&gt; | &lt;span class=&quot;nb&quot;&gt;awk&lt;/span&gt; &lt;span class=&quot;s1&quot;&gt;&apos;{print $1}&apos;&lt;/span&gt;&lt;span class=&quot;si&quot;&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;hostname -I&lt;/code&gt; — shows all IP addresses of the machine&lt;/li&gt;
  &lt;li&gt;&lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;awk &apos;{print $1}&apos;&lt;/code&gt; — extracts the first address&lt;/li&gt;
  &lt;li&gt;&lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;$()&lt;/code&gt; — command substitution; the result is stored in CONTROL_IP&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;We need the IP so worker nodes know where to connect.&lt;/p&gt;

&lt;h3 id=&quot;initialize-the-cluster-&quot;&gt;Initialize the cluster 🚀&lt;/h3&gt;

&lt;div class=&quot;language-bash highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;nb&quot;&gt;sudo &lt;/span&gt;kubeadm init &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
  &lt;span class=&quot;nt&quot;&gt;--pod-network-cidr&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;10.244.0.0/16 &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
  &lt;span class=&quot;nt&quot;&gt;--apiserver-advertise-address&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;$CONTROL_IP&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;&lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;kubeadm init&lt;/code&gt; initializes the control plane (API server, etcd, scheduler, controller-manager). At the end it prints a &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;kubeadm join&lt;/code&gt; command to run on the worker nodes.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Parameters:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;--pod-network-cidr=10.244.0.0/16&lt;/code&gt; — Pod network range (for Flannel)&lt;/li&gt;
  &lt;li&gt;&lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;--apiserver-advertise-address&lt;/code&gt; — IP the API server advertises&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;⏱️ This takes 1–2 minutes.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;📝 Save the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;kubeadm join&lt;/code&gt; command output — you’ll need it for workers!&lt;/strong&gt;&lt;/p&gt;

&lt;h3 id=&quot;configure-kubectl&quot;&gt;Configure kubectl&lt;/h3&gt;

&lt;p&gt;kubeadm also prints instructions to configure kubectl for the current user:&lt;/p&gt;

&lt;div class=&quot;language-bash highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;nb&quot;&gt;mkdir&lt;/span&gt; &lt;span class=&quot;nt&quot;&gt;-p&lt;/span&gt; ~/.kube
&lt;span class=&quot;nb&quot;&gt;sudo cp&lt;/span&gt; /etc/kubernetes/admin.conf ~/.kube/config
&lt;span class=&quot;nb&quot;&gt;sudo chown&lt;/span&gt; &lt;span class=&quot;si&quot;&gt;$(&lt;/span&gt;&lt;span class=&quot;nb&quot;&gt;id&lt;/span&gt; &lt;span class=&quot;nt&quot;&gt;-u&lt;/span&gt;&lt;span class=&quot;si&quot;&gt;)&lt;/span&gt;:&lt;span class=&quot;si&quot;&gt;$(&lt;/span&gt;&lt;span class=&quot;nb&quot;&gt;id&lt;/span&gt; &lt;span class=&quot;nt&quot;&gt;-g&lt;/span&gt;&lt;span class=&quot;si&quot;&gt;)&lt;/span&gt; ~/.kube/config
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;kubectl requires a configuration file to connect to the cluster.&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;&lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;mkdir -p ~/.kube&lt;/code&gt; — create kube config directory&lt;/li&gt;
  &lt;li&gt;&lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;cp admin.conf ~/.kube/config&lt;/code&gt; — copy admin kubeconfig&lt;/li&gt;
  &lt;li&gt;&lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;chown&lt;/code&gt; — change ownership so kubectl can run without sudo&lt;/li&gt;
&lt;/ol&gt;
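
&lt;p&gt;At this point kubectl should be able to reach the API server without sudo. A quick sanity check:&lt;/p&gt;

&lt;div class=&quot;language-bash highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;kubectl cluster-info
kubectl get pods -n kube-system
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Until the CNI plugin is installed in the next step, the node will show &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;NotReady&lt;/code&gt; and the CoreDNS pods will stay &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;Pending&lt;/code&gt;; this is expected.&lt;/p&gt;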

&lt;h2 id=&quot;step-4-install-cni-plugin--flannel&quot;&gt;Step 4: Install CNI plugin — Flannel&lt;/h2&gt;

&lt;div class=&quot;language-bash highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;kubectl apply &lt;span class=&quot;nt&quot;&gt;-f&lt;/span&gt; https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;&lt;a href=&quot;https://github.com/flannel-io/flannel#deploying-flannel-manually&quot;&gt;Flannel&lt;/a&gt;&lt;/strong&gt; is a Container Network Interface (CNI) plugin that enables Pod networking across nodes.&lt;/p&gt;

&lt;p&gt;Kubernetes does not provide cluster networking by itself; Flannel deploys the necessary DaemonSet, ConfigMap, ServiceAccount, and other resources.&lt;/p&gt;
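
&lt;p&gt;You can check that the Flannel pods have started; once its pod is &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;Running&lt;/code&gt; on a node, that node transitions to &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;Ready&lt;/code&gt;:&lt;/p&gt;

&lt;div class=&quot;language-bash highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;kubectl get pods -n kube-flannel
kubectl get nodes
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;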

&lt;h2 id=&quot;step-5-join-worker-nodes&quot;&gt;Step 5: Join worker nodes&lt;/h2&gt;

&lt;p&gt;During &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;kubeadm init&lt;/code&gt; you received a &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;kubeadm join&lt;/code&gt; command. Run it on each worker node.&lt;/p&gt;

&lt;div class=&quot;language-bash highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;k&quot;&gt;for &lt;/span&gt;NODE &lt;span class=&quot;k&quot;&gt;in &lt;/span&gt;k8s-worker-1 k8s-worker-2&lt;span class=&quot;p&quot;&gt;;&lt;/span&gt; &lt;span class=&quot;k&quot;&gt;do
  &lt;/span&gt;multipass &lt;span class=&quot;nb&quot;&gt;exec&lt;/span&gt; &lt;span class=&quot;nv&quot;&gt;$NODE&lt;/span&gt; &lt;span class=&quot;nt&quot;&gt;--&lt;/span&gt; &lt;span class=&quot;nb&quot;&gt;sudo &lt;/span&gt;kubeadm &lt;span class=&quot;nb&quot;&gt;join &lt;/span&gt;192.168.2.26:6443 &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
    &lt;span class=&quot;nt&quot;&gt;--token&lt;/span&gt; bsw6fd.e7624wl2688fybjx &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
    &lt;span class=&quot;nt&quot;&gt;--discovery-token-ca-cert-hash&lt;/span&gt; sha256:7850aa1c6181277e284a08b81256979db25698a89982f0885540376a5376e0bd
&lt;span class=&quot;k&quot;&gt;done&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;blockquote&gt;
  &lt;p&gt;⚠️ &lt;strong&gt;IMPORTANT:&lt;/strong&gt; In your case the IP, token and hash will be &lt;strong&gt;different&lt;/strong&gt;! Use the command that &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;kubeadm init&lt;/code&gt; printed.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;What this does:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;multipass exec $NODE --&lt;/code&gt; — runs the join command on each worker&lt;/li&gt;
  &lt;li&gt;&lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;192.168.2.26:6443&lt;/code&gt; — API server address (port 6443)&lt;/li&gt;
  &lt;li&gt;&lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;--token&lt;/code&gt; — temporary token generated by kubeadm&lt;/li&gt;
  &lt;li&gt;&lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;--discovery-token-ca-cert-hash&lt;/code&gt; — CA cert SHA256 hash to verify authenticity&lt;/li&gt;
&lt;/ul&gt;
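
&lt;p&gt;If you didn’t save the join command, or the token has expired (by default tokens are valid for 24 hours), you can generate a fresh one on the control plane node:&lt;/p&gt;

&lt;div class=&quot;language-bash highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;sudo kubeadm token create --print-join-command
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;This prints a complete &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;kubeadm join&lt;/code&gt; command with a fresh token and the current CA certificate hash.&lt;/p&gt;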

&lt;h2 id=&quot;step-6-verify-the-cluster&quot;&gt;Step 6: Verify the cluster&lt;/h2&gt;

&lt;p&gt;Return to the control plane node and check node status.&lt;/p&gt;

&lt;div class=&quot;language-bash highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;multipass shell k8s-control
kubectl get nodes
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;You should see three nodes with STATUS &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;Ready&lt;/code&gt;:&lt;/p&gt;

&lt;div class=&quot;language-text highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;NAME            STATUS   ROLES           AGE   VERSION
k8s-control     Ready    control-plane   5m    v1.34.0
k8s-worker-1    Ready    &amp;lt;none&amp;gt;          2m    v1.34.0
k8s-worker-2    Ready    &amp;lt;none&amp;gt;          2m    v1.34.0
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;If a node shows &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;NotReady&lt;/code&gt;, wait a minute — Flannel may still be provisioning ⏳&lt;/p&gt;
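
&lt;p&gt;If a node stays &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;NotReady&lt;/code&gt; for longer, the node’s conditions and the Flannel pod running on it are the first places to look:&lt;/p&gt;

&lt;div class=&quot;language-bash highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;kubectl describe node k8s-worker-1 | grep -A 7 Conditions
kubectl get pods -n kube-flannel -o wide
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;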

&lt;h2 id=&quot;step-7-install-metallb&quot;&gt;Step 7: Install MetalLB&lt;/h2&gt;

&lt;h3 id=&quot;what-is-metallb&quot;&gt;What is MetalLB?&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;&lt;a href=&quot;https://metallb.io&quot;&gt;MetalLB&lt;/a&gt;&lt;/strong&gt; is a load balancer implementation for bare-metal clusters. In cloud-managed Kubernetes, LoadBalancer services are provided by the cloud provider. For a local cluster, MetalLB provides similar functionality.&lt;/p&gt;

&lt;h3 id=&quot;apply-manifests&quot;&gt;Apply manifests&lt;/h3&gt;

&lt;div class=&quot;language-bash highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;kubectl apply &lt;span class=&quot;nt&quot;&gt;-f&lt;/span&gt; https://raw.githubusercontent.com/metallb/metallb/v0.15.3/config/manifests/metallb-native.yaml
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;h3 id=&quot;wait-for-readiness&quot;&gt;Wait for readiness&lt;/h3&gt;

&lt;div class=&quot;language-bash highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;kubectl &lt;span class=&quot;nb&quot;&gt;wait&lt;/span&gt; &lt;span class=&quot;nt&quot;&gt;--namespace&lt;/span&gt; metallb-system &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
  &lt;span class=&quot;nt&quot;&gt;--for&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;condition&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;ready pod &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
  &lt;span class=&quot;nt&quot;&gt;--selector&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;app&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;metallb
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;This waits until all MetalLB pods are ready.&lt;/p&gt;

&lt;h3 id=&quot;label-worker-nodes&quot;&gt;Label worker nodes&lt;/h3&gt;

&lt;p&gt;To label all nodes whose names start with &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;k8s-worker-&lt;/code&gt;, get node names and apply a label to each.&lt;/p&gt;

&lt;div class=&quot;language-bash highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;nv&quot;&gt;NODES&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;&lt;span class=&quot;si&quot;&gt;$(&lt;/span&gt;kubectl get nodes &lt;span class=&quot;nt&quot;&gt;-o&lt;/span&gt; &lt;span class=&quot;nv&quot;&gt;jsonpath&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;&lt;span class=&quot;s1&quot;&gt;&apos;{.items[*].metadata.name}&apos;&lt;/span&gt; | &lt;span class=&quot;nb&quot;&gt;tr&lt;/span&gt; &lt;span class=&quot;nt&quot;&gt;-s&lt;/span&gt; &lt;span class=&quot;s1&quot;&gt;&apos;[[:space:]]&apos;&lt;/span&gt; &lt;span class=&quot;s1&quot;&gt;&apos;\n&apos;&lt;/span&gt; | &lt;span class=&quot;nb&quot;&gt;grep&lt;/span&gt; &lt;span class=&quot;s1&quot;&gt;&apos;^k8s-worker-&apos;&lt;/span&gt;&lt;span class=&quot;si&quot;&gt;)&lt;/span&gt;

&lt;span class=&quot;k&quot;&gt;for &lt;/span&gt;NODE &lt;span class=&quot;k&quot;&gt;in&lt;/span&gt; &lt;span class=&quot;nv&quot;&gt;$NODES&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;;&lt;/span&gt; &lt;span class=&quot;k&quot;&gt;do
  &lt;/span&gt;&lt;span class=&quot;nb&quot;&gt;echo&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;Applying label to node: &lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;$NODE&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;&lt;/span&gt;
  kubectl label node &lt;span class=&quot;s2&quot;&gt;&quot;&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;$NODE&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;&lt;/span&gt; metallb-role&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;worker &lt;span class=&quot;nt&quot;&gt;--overwrite&lt;/span&gt;
&lt;span class=&quot;k&quot;&gt;done&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;We add the label &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;metallb-role=worker&lt;/code&gt; to the worker nodes. Labels are how resources are selected in Kubernetes. MetalLB will announce service IPs only from nodes that carry this label.&lt;/p&gt;

&lt;h3 id=&quot;configure-ip-pool-and-l2advertisement&quot;&gt;Configure IP pool and L2Advertisement&lt;/h3&gt;

&lt;div class=&quot;language-bash highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;nv&quot;&gt;CONTROL_IP&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;&lt;span class=&quot;si&quot;&gt;$(&lt;/span&gt;&lt;span class=&quot;nb&quot;&gt;hostname&lt;/span&gt; &lt;span class=&quot;nt&quot;&gt;-I&lt;/span&gt; | &lt;span class=&quot;nb&quot;&gt;awk&lt;/span&gt; &lt;span class=&quot;s1&quot;&gt;&apos;{print $1}&apos;&lt;/span&gt;&lt;span class=&quot;si&quot;&gt;)&lt;/span&gt;
&lt;span class=&quot;nv&quot;&gt;BASE_IP&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;&lt;span class=&quot;si&quot;&gt;$(&lt;/span&gt;&lt;span class=&quot;nb&quot;&gt;echo&lt;/span&gt; &lt;span class=&quot;nv&quot;&gt;$CONTROL_IP&lt;/span&gt; | &lt;span class=&quot;nb&quot;&gt;cut&lt;/span&gt; &lt;span class=&quot;nt&quot;&gt;-d&lt;/span&gt;&lt;span class=&quot;nb&quot;&gt;.&lt;/span&gt; &lt;span class=&quot;nt&quot;&gt;-f1-3&lt;/span&gt;&lt;span class=&quot;si&quot;&gt;)&lt;/span&gt;

&lt;span class=&quot;nb&quot;&gt;cat&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;&amp;lt;&amp;lt;&lt;/span&gt;&lt;span class=&quot;no&quot;&gt;EOF&lt;/span&gt;&lt;span class=&quot;sh&quot;&gt; | kubectl apply -f -
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: first-pool
  namespace: metallb-system
spec:
  addresses:
  - &lt;/span&gt;&lt;span class=&quot;k&quot;&gt;${&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;BASE_IP&lt;/span&gt;&lt;span class=&quot;k&quot;&gt;}&lt;/span&gt;&lt;span class=&quot;sh&quot;&gt;.200-&lt;/span&gt;&lt;span class=&quot;k&quot;&gt;${&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;BASE_IP&lt;/span&gt;&lt;span class=&quot;k&quot;&gt;}&lt;/span&gt;&lt;span class=&quot;sh&quot;&gt;.250
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: worker-nodes-l2
  namespace: metallb-system
spec:
  ipAddressPools:
  - first-pool
  nodeSelectors:
  - matchLabels:
      metallb-role: worker
&lt;/span&gt;&lt;span class=&quot;no&quot;&gt;EOF
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;What this does:&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;Get the base IP from the control plane IP (e.g. if control IP is &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;192.168.2.14&lt;/code&gt;, BASE_IP becomes &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;192.168.2&lt;/code&gt;)&lt;/li&gt;
  &lt;li&gt;&lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;IPAddressPool&lt;/code&gt; defines a range of external IPs MetalLB can allocate (.200–.250 in this example)&lt;/li&gt;
  &lt;li&gt;&lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;L2Advertisement&lt;/code&gt; configures MetalLB to announce those IPs via ARP (Layer 2)&lt;/li&gt;
  &lt;li&gt;&lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;nodeSelectors&lt;/code&gt; restricts the announcement of these IPs to nodes labeled &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;metallb-role=worker&lt;/code&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h2 id=&quot;step-8-demo-&quot;&gt;Step 8: Demo 🎉&lt;/h2&gt;

&lt;h3 id=&quot;create-a-deployment&quot;&gt;Create a Deployment&lt;/h3&gt;

&lt;div class=&quot;language-bash highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;kubectl create deployment hello &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
  &lt;span class=&quot;nt&quot;&gt;--image&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;nginxdemos/hello:plain-text &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
  &lt;span class=&quot;nt&quot;&gt;--replicas&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;3 &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
  &lt;span class=&quot;nt&quot;&gt;--port&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;80
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Creates 3 replicas of a simple nginx demo app.&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;deployment&lt;/code&gt; — manages a set of identical Pods&lt;/li&gt;
  &lt;li&gt;&lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;--image&lt;/code&gt; — container image for the Pods&lt;/li&gt;
  &lt;li&gt;&lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;--replicas=3&lt;/code&gt; — run 3 copies&lt;/li&gt;
  &lt;li&gt;&lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;--port=80&lt;/code&gt; — container port&lt;/li&gt;
&lt;/ul&gt;

&lt;h3 id=&quot;create-a-loadbalancer-service&quot;&gt;Create a LoadBalancer Service&lt;/h3&gt;

&lt;div class=&quot;language-bash highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;kubectl expose deployment hello &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
  &lt;span class=&quot;nt&quot;&gt;--type&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;LoadBalancer &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
  &lt;span class=&quot;nt&quot;&gt;--port&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;80
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;A LoadBalancer service will get an external IP from MetalLB.&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;expose deployment&lt;/code&gt; — creates a Service for the Deployment&lt;/li&gt;
  &lt;li&gt;&lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;--type=LoadBalancer&lt;/code&gt; — request an external IP (provided by MetalLB)&lt;/li&gt;
  &lt;li&gt;&lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;--port=80&lt;/code&gt; — service port&lt;/li&gt;
&lt;/ul&gt;

&lt;h3 id=&quot;test-load-balancing&quot;&gt;Test load balancing&lt;/h3&gt;

&lt;div class=&quot;language-bash highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;nv&quot;&gt;EXTERNAL_IP&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;&lt;span class=&quot;si&quot;&gt;$(&lt;/span&gt;kubectl get svc hello &lt;span class=&quot;nt&quot;&gt;-o&lt;/span&gt; &lt;span class=&quot;nv&quot;&gt;jsonpath&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;&lt;span class=&quot;s1&quot;&gt;&apos;{.status.loadBalancer.ingress[0].ip}&apos;&lt;/span&gt;&lt;span class=&quot;si&quot;&gt;)&lt;/span&gt;
&lt;span class=&quot;nb&quot;&gt;echo&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;LB IP: &lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;$EXTERNAL_IP&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;&lt;/span&gt;

&lt;span class=&quot;k&quot;&gt;for &lt;/span&gt;i &lt;span class=&quot;k&quot;&gt;in&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;{&lt;/span&gt;1..10&lt;span class=&quot;o&quot;&gt;}&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;;&lt;/span&gt; &lt;span class=&quot;k&quot;&gt;do
  &lt;/span&gt;&lt;span class=&quot;nb&quot;&gt;echo&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;Request &lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;$i&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;:&quot;&lt;/span&gt;
  curl &lt;span class=&quot;nt&quot;&gt;-s&lt;/span&gt; http://&lt;span class=&quot;nv&quot;&gt;$EXTERNAL_IP&lt;/span&gt; | &lt;span class=&quot;nb&quot;&gt;grep&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;Server address&quot;&lt;/span&gt;
  &lt;span class=&quot;nb&quot;&gt;echo&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;---&quot;&lt;/span&gt;
&lt;span class=&quot;k&quot;&gt;done&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;This obtains the external IP allocated by MetalLB and makes 10 requests, showing which Pod served each request.&lt;/p&gt;

&lt;p&gt;Expected result: requests are distributed across the three Pods.&lt;/p&gt;

&lt;div class=&quot;language-plaintext highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;Request 1:
Server address: 10.244.1.2:80
---
Request 2:
Server address: 10.244.2.3:80
---
Request 3:
Server address: 10.244.1.4:80
---
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;h2 id=&quot;-important-notes-about-using-swap&quot;&gt;🚨 Important notes about using SWAP&lt;/h2&gt;

&lt;p&gt;By default Multipass VMs have swap disabled.&lt;/p&gt;

&lt;p&gt;By default, kubelet refuses to start if swap is enabled, because swapping makes Pod behavior unpredictable: container memory can be pushed out to disk, slowing it down and undermining memory limits.&lt;/p&gt;

&lt;p&gt;In our case swap is disabled by default — so it’s fine! ✅&lt;/p&gt;

&lt;p&gt;If you deploy Kubernetes on systems where swap is enabled:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Option 1:&lt;/strong&gt; Turn off swap&lt;/p&gt;

&lt;div class=&quot;language-bash highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;nb&quot;&gt;sudo &lt;/span&gt;swapoff &lt;span class=&quot;nt&quot;&gt;-a&lt;/span&gt;
&lt;span class=&quot;c&quot;&gt;# And comment out swap in /etc/fstab&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;Option 2:&lt;/strong&gt; Configure kubelet to run with &lt;a href=&quot;https://andygol-k8s.netlify.app/docs/concepts/cluster-administration/swap-memory-management/&quot;&gt;swap support&lt;/a&gt; (beta since Kubernetes 1.28)&lt;/p&gt;
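
&lt;p&gt;For reference, Option 2 boils down to a small kubelet configuration change. The file path below is the kubeadm default, and &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;LimitedSwap&lt;/code&gt; is one of the supported behaviors; check the linked docs for your Kubernetes version before relying on this:&lt;/p&gt;

&lt;div class=&quot;language-yaml highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;# /var/lib/kubelet/config.yaml (kubeadm default location)
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
failSwapOn: false        # allow kubelet to start with swap enabled
memorySwap:
  swapBehavior: LimitedSwap   # Pods may use swap only within their limits
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;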

&lt;h2 id=&quot;useful-commands&quot;&gt;Useful commands&lt;/h2&gt;

&lt;h3 id=&quot;status-checks&quot;&gt;Status checks&lt;/h3&gt;

&lt;div class=&quot;language-bash highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;kubectl get nodes
kubectl get pods &lt;span class=&quot;nt&quot;&gt;-A&lt;/span&gt;
kubectl get svc
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;h3 id=&quot;view-logs&quot;&gt;View logs&lt;/h3&gt;

&lt;div class=&quot;language-bash highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;kubectl logs &lt;span class=&quot;nt&quot;&gt;-n&lt;/span&gt; kube-system &lt;span class=&quot;nt&quot;&gt;-l&lt;/span&gt; &lt;span class=&quot;nv&quot;&gt;app&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;flannel
kubectl logs &lt;span class=&quot;nt&quot;&gt;-n&lt;/span&gt; metallb-system &lt;span class=&quot;nt&quot;&gt;-l&lt;/span&gt; &lt;span class=&quot;nv&quot;&gt;app&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;metallb
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;h3 id=&quot;delete-resources&quot;&gt;Delete resources&lt;/h3&gt;

&lt;div class=&quot;language-bash highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;kubectl delete svc hello
kubectl delete deployment hello
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;h3 id=&quot;stop-the-cluster&quot;&gt;Stop the cluster&lt;/h3&gt;

&lt;div class=&quot;language-bash highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;c&quot;&gt;# Run on the host: define the node list, then stop all VMs&lt;/span&gt;
&lt;span class=&quot;nv&quot;&gt;NODES&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;k8s-control k8s-worker-1 k8s-worker-2&quot;&lt;/span&gt;
&lt;span class=&quot;k&quot;&gt;for &lt;/span&gt;NODE &lt;span class=&quot;k&quot;&gt;in&lt;/span&gt; &lt;span class=&quot;nv&quot;&gt;$NODES&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;;&lt;/span&gt; &lt;span class=&quot;k&quot;&gt;do
  &lt;/span&gt;multipass stop &lt;span class=&quot;nv&quot;&gt;$NODE&lt;/span&gt;
&lt;span class=&quot;k&quot;&gt;done&lt;/span&gt;

&lt;span class=&quot;c&quot;&gt;# Start again&lt;/span&gt;
&lt;span class=&quot;k&quot;&gt;for &lt;/span&gt;NODE &lt;span class=&quot;k&quot;&gt;in&lt;/span&gt; &lt;span class=&quot;nv&quot;&gt;$NODES&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;;&lt;/span&gt; &lt;span class=&quot;k&quot;&gt;do
  &lt;/span&gt;multipass start &lt;span class=&quot;nv&quot;&gt;$NODE&lt;/span&gt;
&lt;span class=&quot;k&quot;&gt;done&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;h3 id=&quot;remove-the-cluster&quot;&gt;Remove the cluster&lt;/h3&gt;

&lt;div class=&quot;language-bash highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;nv&quot;&gt;NODES&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;k8s-control k8s-worker-1 k8s-worker-2&quot;&lt;/span&gt;
&lt;span class=&quot;k&quot;&gt;for &lt;/span&gt;NODE &lt;span class=&quot;k&quot;&gt;in&lt;/span&gt; &lt;span class=&quot;nv&quot;&gt;$NODES&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;;&lt;/span&gt; &lt;span class=&quot;k&quot;&gt;do
  &lt;/span&gt;multipass delete &lt;span class=&quot;nv&quot;&gt;$NODE&lt;/span&gt;
&lt;span class=&quot;k&quot;&gt;done
&lt;/span&gt;multipass purge
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;h2 id=&quot;summary&quot;&gt;Summary&lt;/h2&gt;

&lt;p&gt;Congratulations! 🎉 You just created a full Kubernetes cluster on your local machine.&lt;/p&gt;

&lt;p&gt;What we did:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;✅ Created 3 virtual machines&lt;/li&gt;
  &lt;li&gt;✅ Configured containerd and kernel modules&lt;/li&gt;
  &lt;li&gt;✅ Initialized the control plane&lt;/li&gt;
  &lt;li&gt;✅ Joined worker nodes&lt;/li&gt;
  &lt;li&gt;✅ Installed Flannel for networking&lt;/li&gt;
  &lt;li&gt;✅ Configured MetalLB for LoadBalancer services&lt;/li&gt;
  &lt;li&gt;✅ Deployed a test app with load balancing&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Now you have a local environment for:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;experimenting with Kubernetes&lt;/li&gt;
  &lt;li&gt;testing manifests&lt;/li&gt;
  &lt;li&gt;learning cluster architecture&lt;/li&gt;
  &lt;li&gt;preparing for certifications (CKA, CKAD)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Hope this guide was useful!&lt;/p&gt;

&lt;p&gt;If you have questions or issues — leave a comment. Share this post with colleagues who may find it helpful 🚀&lt;/p&gt;

&lt;p&gt;Happy Kubernetes! ☸️&lt;/p&gt;

&lt;hr /&gt;

&lt;h2 id=&quot;disclaimer-&quot;&gt;Disclaimer 🚨&lt;/h2&gt;

&lt;p&gt;This cluster is intended for development and testing.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;For production use you need additional configuration:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Firewall and Network Policies&lt;/li&gt;
  &lt;li&gt;RBAC (Role-Based Access Control)&lt;/li&gt;
  &lt;li&gt;Secrets management&lt;/li&gt;
  &lt;li&gt;Monitoring and logging&lt;/li&gt;
  &lt;li&gt;etcd backups&lt;/li&gt;
  &lt;li&gt;High Availability control plane&lt;/li&gt;
  &lt;li&gt;Vulnerability scanning&lt;/li&gt;
&lt;/ul&gt;

&lt;hr /&gt;

&lt;div class=&quot;footnotes&quot; role=&quot;doc-endnotes&quot;&gt;
  &lt;ol&gt;
    &lt;li id=&quot;fn:1&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Preparation script &lt;a href=&quot;https://gist.github.com/Andygol/37d1397423e535bd0f7fabb593e81c41&quot;&gt;https://gist.github.com/Andygol/37d1397423e535bd0f7fabb593e81c41&lt;/a&gt; &lt;a href=&quot;#fnref:1&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
  &lt;/ol&gt;
&lt;/div&gt;
      </content>
      <author>
        <name>Andrii Holovin</name>
      </author>

      
        <category term="Kubernetes"/>
        <category term="Containers"/>
        <category term="Orchestration"/>
        <category term="DevOps"/>
        <category term="Infrastructure"/>
        <category term="kubeadm"/>
        <category term="flannel"/>
        <category term="containerd"/>
        <summary type="html">💡 You won’t find such a detailed step-by-step guide in the official Kubernetes documentation! The official docs contain separate recommendations. Here everything is gathered in one place so you can quickly and easily create your first cluster.</summary>
      
    </entry>
  
    <entry xml:lang="en">
      <title type="html">From Docker to Kubernetes: When Containers Stop Being Simple</title>
      <link href="https://blog.andygol.co.ua/en/2025/10/30/from-docker-to-kubernetes/" rel="alternate" type="text/html" title="From Docker to Kubernetes: When Containers Stop Being Simple"/>
      <published>2025-10-30T08:30:00+00:00</published>
      <updated>2025-10-30T08:30:00+00:00</updated>
      <id>https://blog.andygol.co.ua/en/2025/10/30/from-docker-to-kubernetes</id>
      <content type="html" xml:base="https://blog.andygol.co.ua/en/2025/10/30/from-docker-to-kubernetes/">
        &lt;p&gt;Wow, you’ve finally done it!&lt;br /&gt;
After days or even weeks of work — your app is fully &lt;strong&gt;Dockerized&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;You’ve got a &lt;strong&gt;Node.js API&lt;/strong&gt;, a &lt;strong&gt;React frontend&lt;/strong&gt;, and a &lt;strong&gt;Postgres database&lt;/strong&gt;, all wrapped up nice and neat in their own little containers.&lt;br /&gt;
One &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;docker-compose up&lt;/code&gt;, and everything comes alive.&lt;br /&gt;
Your local setup feels like an orchestra — every container plays its part, and you’re the conductor.&lt;/p&gt;

&lt;p&gt;You feel proud. A real DevOps wizard.&lt;/p&gt;

&lt;h2 id=&quot;then-comes-production&quot;&gt;Then Comes Production&lt;/h2&gt;

&lt;p&gt;Everything runs perfectly on your laptop.&lt;br /&gt;
But once it’s time to deploy to real servers, the illusion of simplicity is gone.&lt;/p&gt;

&lt;p&gt;You want &lt;strong&gt;reliability&lt;/strong&gt;.&lt;br /&gt;
You need &lt;strong&gt;scalability&lt;/strong&gt;.&lt;br /&gt;
And it seems obvious:&lt;/p&gt;

&lt;blockquote&gt;
  &lt;p&gt;“I’ll just run these containers on a few servers.”&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Simple? Not really.&lt;/p&gt;

&lt;h2 id=&quot;when-containers-turn-into-chaos&quot;&gt;When Containers Turn Into Chaos&lt;/h2&gt;

&lt;p&gt;You hit your first wall:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;How will the frontend find the API when container IPs keep changing?&lt;/li&gt;
  &lt;li&gt;What happens when a server crashes at 4 AM? Who restarts the containers?&lt;/li&gt;
  &lt;li&gt;How can you update the API image without taking everything down?&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You start writing &lt;strong&gt;Bash scripts&lt;/strong&gt;, copying images via SSH, and trying to balance traffic manually.&lt;br /&gt;
But every update feels risky.&lt;br /&gt;
Every crash — a small disaster.&lt;/p&gt;

&lt;p&gt;Your once-elegant Docker setup slowly turns into a fragile web of scripts and hope.&lt;/p&gt;

&lt;h2 id=&quot;enter-kubernetes-not-just-docker-on-steroids&quot;&gt;Enter Kubernetes: Not Just Docker on Steroids&lt;/h2&gt;

&lt;p&gt;Enter &lt;strong&gt;Kubernetes&lt;/strong&gt; (or simply &lt;strong&gt;K8s&lt;/strong&gt;)&lt;sup id=&quot;fnref:1&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:1&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;1&lt;/a&gt;&lt;/sup&gt;.&lt;/p&gt;

&lt;p&gt;You’ve probably heard about it — maybe you think it’s overly complicated or something only big tech companies use.&lt;br /&gt;
And yes, it is complex at first.&lt;br /&gt;
But that’s because &lt;strong&gt;Kubernetes solves a completely different problem&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Docker helps you &lt;strong&gt;package&lt;/strong&gt; an application.
Kubernetes helps you &lt;strong&gt;run&lt;/strong&gt; and &lt;strong&gt;manage&lt;/strong&gt; those applications &lt;strong&gt;at scale&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;It doesn’t just start containers — it manages their entire lifecycle: deployment, scaling, self-healing, updates, and service discovery.&lt;/p&gt;

&lt;p&gt;And it all begins with a change in mindset.&lt;/p&gt;

&lt;h2 id=&quot;declarative-thinking-focus-on-what-not-how&quot;&gt;Declarative Thinking: Focus on “What,” Not “How”&lt;/h2&gt;

&lt;p&gt;In the Docker world, you act &lt;strong&gt;imperatively&lt;/strong&gt;:&lt;/p&gt;

&lt;blockquote&gt;
  &lt;p&gt;“Start this container here. Stop that one there.”&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;In Kubernetes, you act &lt;strong&gt;declaratively&lt;/strong&gt;:&lt;/p&gt;

&lt;blockquote&gt;
  &lt;p&gt;“I want three replicas of my API running image v1.2.&lt;br /&gt;
Each should have 500 MB of RAM.&lt;br /&gt;
They should all be reachable via api-service.”&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;You don’t tell the system how to do it.&lt;br /&gt;
You just describe what the desired end state should look like.&lt;/p&gt;

&lt;p&gt;Kubernetes constantly watches the actual state of the system and works to &lt;strong&gt;make it match your desired state&lt;/strong&gt; — automatically.&lt;/p&gt;
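
&lt;p&gt;Written down, that desired state is a (slightly simplified) Deployment manifest; the names and the image tag below simply follow the example above and are illustrative:&lt;/p&gt;

&lt;div class=&quot;language-yaml highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;apiVersion: apps/v1
kind: Deployment
metadata:
  name: api
spec:
  replicas: 3                       # &quot;three replicas of my API&quot;
  selector:
    matchLabels:
      app: api
  template:
    metadata:
      labels:
        app: api
    spec:
      containers:
      - name: api
        image: my-registry/api:v1.2   # &quot;running image v1.2&quot;
        resources:
          limits:
            memory: &quot;500Mi&quot;           # &quot;500 MB of RAM&quot;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;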

&lt;h2 id=&quot;how-kubernetes-solves-real-production-problems&quot;&gt;How Kubernetes Solves Real Production Problems&lt;/h2&gt;

&lt;h3 id=&quot;automated-scheduling--bin-packing&quot;&gt;Automated Scheduling &amp;amp; Bin Packing&lt;/h3&gt;

&lt;p&gt;Kubernetes sees all your servers (called &lt;strong&gt;nodes&lt;/strong&gt;) and decides where to run containers based on available resources.&lt;br /&gt;
It distributes workloads intelligently — no manual assignments needed.&lt;/p&gt;

&lt;h3 id=&quot;self-healing&quot;&gt;Self-Healing&lt;/h3&gt;

&lt;p&gt;If a container crashes or a node fails, Kubernetes immediately detects it.&lt;br /&gt;
Desired state: 3 replicas.&lt;br /&gt;
Actual state: 2 replicas.&lt;br /&gt;
It spins up a new one.&lt;/p&gt;

&lt;p&gt;No late-night SSH sessions, no manual restarts.&lt;br /&gt;
&lt;strong&gt;The system heals itself&lt;/strong&gt;.&lt;/p&gt;

&lt;h3 id=&quot;horizontal-scaling&quot;&gt;Horizontal Scaling&lt;/h3&gt;

&lt;p&gt;Traffic spikes? No problem.&lt;br /&gt;
You just update one line in your YAML:&lt;/p&gt;

&lt;div class=&quot;language-yaml highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;na&quot;&gt;spec&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
  &lt;span class=&quot;na&quot;&gt;replicas&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;m&quot;&gt;12&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Kubernetes launches more containers and spreads them across available nodes.&lt;/p&gt;

&lt;p&gt;Don’t want to do it manually?&lt;br /&gt;
Enable &lt;strong&gt;autoscaling&lt;/strong&gt;, and K8s will automatically adjust replica counts based on CPU or memory usage.&lt;/p&gt;
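
&lt;p&gt;Enabling it is a single command that creates a Horizontal Pod Autoscaler; the deployment name and thresholds below are illustrative:&lt;/p&gt;

&lt;div class=&quot;language-bash highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;# Keep between 3 and 12 replicas, targeting 80% average CPU
kubectl autoscale deployment api --min=3 --max=12 --cpu-percent=80
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;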

&lt;h3 id=&quot;service-discovery--load-balancing&quot;&gt;Service Discovery &amp;amp; Load Balancing&lt;/h3&gt;

&lt;p&gt;Containers don’t rely on IPs to find each other.&lt;br /&gt;
You create a &lt;strong&gt;Service&lt;/strong&gt; — an abstraction that gives your app a stable name (like &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;api-service&lt;/code&gt;) and an internal IP.&lt;/p&gt;

&lt;p&gt;When the frontend calls &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;api-service&lt;/code&gt;, Kubernetes automatically routes the request to one of the healthy API instances.&lt;br /&gt;
Traffic is balanced automatically.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;No more hardcoded IPs. No more fragile networking hacks.&lt;/strong&gt;&lt;/p&gt;
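
&lt;p&gt;Such a Service is only a few lines of YAML. The selector and ports below are illustrative and assume the API Pods carry the label &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;app: api&lt;/code&gt;:&lt;/p&gt;

&lt;div class=&quot;language-yaml highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;apiVersion: v1
kind: Service
metadata:
  name: api-service
spec:
  selector:
    app: api          # matches Pods carrying this label
  ports:
  - port: 80          # stable port clients use
    targetPort: 3000  # port the API container listens on
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;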

&lt;h3 id=&quot;automated-rollouts--rollbacks&quot;&gt;Automated Rollouts &amp;amp; Rollbacks&lt;/h3&gt;

&lt;p&gt;Need to update your API to version v1.3?&lt;br /&gt;
Just change the image tag in your YAML.&lt;/p&gt;

&lt;p&gt;Kubernetes performs a &lt;strong&gt;rolling update&lt;/strong&gt; — gradually spinning up new v1.3 containers while shutting down the old v1.2 ones.&lt;br /&gt;
No downtime. No user impact.&lt;/p&gt;

&lt;p&gt;And if something goes wrong?&lt;br /&gt;
A rolling update stops progressing when the new Pods fail their health checks, and Kubernetes lets you &lt;strong&gt;roll back&lt;/strong&gt; to the previous stable version with a single command.&lt;/p&gt;
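
&lt;p&gt;As a sketch, the whole update cycle is a few commands; the deployment and container names follow the API example above:&lt;/p&gt;

&lt;div class=&quot;language-bash highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;# Switch the Deployment to the new image tag
kubectl set image deployment/api api=my-registry/api:v1.3

# Watch the rolling update progress
kubectl rollout status deployment/api

# Return to the previous revision if needed
kubectl rollout undo deployment/api
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;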

&lt;h2 id=&quot;kubernetes-as-the-operating-system-for-your-applications&quot;&gt;Kubernetes as the Operating System for Your Applications&lt;/h2&gt;

&lt;p&gt;Kubernetes isn’t just another DevOps tool.&lt;br /&gt;
It’s an operating system for distributed systems.&lt;/p&gt;

&lt;p&gt;It handles resource management, updates, load balancing, recovery — everything that used to require dozens of scripts and sleepless nights.&lt;/p&gt;

&lt;p&gt;You no longer waste time babysitting servers.&lt;br /&gt;
You can focus on what really matters — &lt;strong&gt;building great software&lt;/strong&gt;.&lt;/p&gt;

&lt;h2 id=&quot;manageability&quot;&gt;Manageability&lt;/h2&gt;

&lt;p&gt;Kubernetes doesn’t promise simplicity — it promises &lt;strong&gt;control&lt;/strong&gt;.&lt;br /&gt;
It lets you describe &lt;em&gt;how your system should look&lt;/em&gt;, and then it handles everything else: placement, recovery, scaling, networking, and updates.&lt;/p&gt;

&lt;p&gt;Docker was the first step.&lt;br /&gt;
Kubernetes is the next level of infrastructure maturity.&lt;/p&gt;

&lt;blockquote&gt;
  &lt;p&gt;Containers made development easier.&lt;br /&gt;
Kubernetes makes production predictable.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;hr /&gt;
&lt;div class=&quot;footnotes&quot; role=&quot;doc-endnotes&quot;&gt;
  &lt;ol&gt;
    &lt;li id=&quot;fn:1&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Kubernetes is a portable, extensible, open-source platform for managing containerized workloads and services that facilitates both declarative configuration and automation. It has a large, rapidly growing ecosystem. Services, support, and tools are widely available.&lt;br /&gt; &lt;a href=&quot;https://andygol-k8s.netlify.app/docs/concepts/overview/&quot;&gt;https://andygol-k8s.netlify.app/docs/concepts/overview/&lt;/a&gt; &lt;a href=&quot;#fnref:1&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
  &lt;/ol&gt;
&lt;/div&gt;
      </content>
      <author>
        <name>Andrii Holovin</name>
      </author>

      
        <category term="Docker"/>
        <category term="Kubernetes"/>
        <category term="Containers"/>
        <category term="Orchestration"/>
        <category term="Scaling"/>
        <category term="DevOps"/>
        <category term="Infrastructure"/>
        <category term="Automation"/>
        <category term="Networking"/>
        <summary type="html">Wow, you’ve finally done it! After days or even weeks of work — your app is fully Dockerized.</summary>
      
    </entry>
  
    <entry xml:lang="en">
      <title type="html">Minute Changes in OpenStreetMap Aren’t That Minute</title>
      <link href="https://blog.andygol.co.ua/en/2025/05/08/osm-minutes-diffs/" rel="alternate" type="text/html" title="Minute Changes in OpenStreetMap Aren’t That Minute"/>
      <published>2025-05-08T08:30:00+00:00</published>
      <updated>2025-05-08T08:30:00+00:00</updated>
      <id>https://blog.andygol.co.ua/en/2025/05/08/osm-minutes-diffs</id>
      <content type="html" xml:base="https://blog.andygol.co.ua/en/2025/05/08/osm-minutes-diffs/">
        &lt;p&gt;OpenStreetMap is a great example of a collaborative project for building an open geospatial data repository. Collecting the data is only part of the process. It’s collected so that it can be used. You can obtain data for the current map area using the &lt;a href=&quot;https://www.openstreetmap.org/export&quot;&gt;Export&lt;/a&gt; menu.&lt;/p&gt;

&lt;p&gt;&lt;a href=&quot;https://www.openstreetmap.org/export#map=12/48.4680/34.9894&quot;&gt;&lt;img src=&quot;/images/2025/05/2025-05-08-osm-export-tab-12-uk.png&quot; alt=&quot;Dnipro, Ukraine. OpenStreetMap, z12&quot; /&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;However, the ability to download data for the current area has some limitations. You won’t be able to do it if you are viewing the map at zoom level z11 or lower 👇. See the message in the yellow rectangle?&lt;/p&gt;

&lt;p&gt;&lt;a href=&quot;https://www.openstreetmap.org/export#map=11/48.4681/34.9894&quot;&gt;&lt;img src=&quot;/images/2025/05/2025-05-08-osm-export-tab-11-uk.png&quot; alt=&quot;Dnipro, Ukraine. OpenStreetMap, z11&quot; /&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2 id=&quot;planet-osm&quot;&gt;Planet OSM&lt;/h2&gt;

&lt;p&gt;If you need data for a larger area or even for the entire planet, head over to where the project offers data for download — the planet dump, &lt;a href=&quot;https://planet.openstreetmap.org/&quot;&gt;https://planet.openstreetmap.org/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href=&quot;https://planet.osm.org&quot;&gt;&lt;img src=&quot;/images/2025/05/2025-05-08-planet-osm-org.png&quot; alt=&quot;planet.osm.org&quot; /&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Here you can download data for the whole planet, either as a &lt;a href=&quot;https://planet.openstreetmap.org/planet/&quot;&gt;data snapshot&lt;/a&gt; for a specific date or as a &lt;a href=&quot;https://planet.openstreetmap.org/planet/full-history/&quot;&gt;full history dump&lt;/a&gt;, which are generated weekly.&lt;/p&gt;

&lt;p&gt;This data is provided free of charge by the project, but you can &lt;a href=&quot;https://supporting.openstreetmap.org/&quot;&gt;support the project&lt;/a&gt; by donating or becoming a member of the OpenStreetMap Foundation.&lt;/p&gt;

&lt;p&gt;&lt;a href=&quot;https://supporting.openstreetmap.org/donate/&quot;&gt;&lt;img src=&quot;/images/2025/05/2025-05-08-donate-osm.png&quot; alt=&quot;https://supporting.openstreetmap.org/donate/&quot; /&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2 id=&quot;diff-files&quot;&gt;Diff Files&lt;/h2&gt;

&lt;p&gt;For those less familiar with the project, diffs are &lt;a href=&quot;https://planet.openstreetmap.org/replication/&quot;&gt;sets of changes&lt;/a&gt; that occurred in the data and are published by OpenStreetMap every day, hour, and minute. These files list all changes made to the database within a selected time period.&lt;/p&gt;

&lt;p&gt;Developers can use these diffs to update their local copies of the database, which are then used for geocoding, map rendering, and other tools in near real-time.&lt;/p&gt;

&lt;p&gt;One of the key features that enables tracking changes in this massive database is the concept of “minute diffs”. At first glance, the name suggests near-instant delivery of every change. Sounds ideal, right? But there are a few nuances worth considering:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;strong&gt;“Near” real time&lt;/strong&gt; is the key phrase. Although change files are generated every minute, their actual availability may be delayed. Depending on server load, the generation and publication process may take a bit longer. So, while you may see changes within a few minutes, it won’t always be exactly one minute.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Processing diffs&lt;/strong&gt; is not instant. Getting the diff file is only the first step. Then the software must download, parse, and apply the changes to the local database copy. This process also takes some time, depending on the size of the change file and your hardware’s performance. During peak editing times on OpenStreetMap, these files can become significantly larger, slowing down the update process.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Applying the diffs&lt;/strong&gt; in the right order is essential. To ensure data consistency, changes must be applied in the correct order. This means if you miss several minute diffs (e.g., due to network issues), you’ll need to process them sequentially before your database is up to date again.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2 id=&quot;using-diffs&quot;&gt;Using Diffs&lt;/h2&gt;

&lt;p&gt;You’ve obtained the planet dump and loaded it into your map rendering or routing service. That’s just the beginning. In today’s fast-paced world, information changes rapidly, and those who keep up with those changes will have a significant advantage.&lt;/p&gt;

&lt;p&gt;The planet dump file in OpenStreetMap is generated weekly.&lt;/p&gt;
&lt;blockquote&gt;
  &lt;p&gt;As of this writing, May 8, 2025, &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;planet-latest.osm.bz2&lt;/code&gt; was created on &lt;em&gt;2025-05-02 23:50&lt;/em&gt; and its size was 150G; you can also get the data in binary pbf format — &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;planet-latest.osm.pbf&lt;/code&gt;, &lt;em&gt;2025-05-02 23:50&lt;/em&gt;, 81G; &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;planet-250428.osm.pbf&lt;/code&gt;, &lt;em&gt;2025-05-02 23:50&lt;/em&gt;, 81G.&lt;/p&gt;

  &lt;p&gt;This latest dump contains data as of &lt;em&gt;April 25, 2025&lt;/em&gt;, and was published on &lt;em&gt;May 2, 2025&lt;/em&gt;.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;This means that at the time of publication, the dump is already 7 days out of date. Add a couple more days before you get around to downloading it, and your data may be 1–2 weeks outdated at the start of deployment. Then add the time needed to load the dump into your system, and you might find that your data is already 3–5 weeks stale by the time you begin using it (when talking about the full-planet scale). To catch up and get current data, you should use replication files (&lt;a href=&quot;https://planet.openstreetmap.org/replication/&quot;&gt;https://planet.openstreetmap.org/replication/&lt;/a&gt;): the daily, hourly, and minute diffs.&lt;/p&gt;

&lt;h3 id=&quot;replication-process&quot;&gt;Replication Process&lt;/h3&gt;

&lt;p&gt;Each diff file is accompanied by an additional metadata file describing the diff — &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;state.txt&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;/images/2025/05/2025-05-08-planet-osm-state-txt.png&quot; alt=&quot;state.txt&quot; /&gt;&lt;/p&gt;

&lt;p&gt;The &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;state.txt&lt;/code&gt; file contains the timestamp and the sequence number of the diff. Using this number, you can retrieve the corresponding diff file (&lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;717.osc.gz&lt;/code&gt;, &lt;em&gt;2025-05-08 17:01&lt;/em&gt;, 101K). For convenience, the sequence number is split into triplets: &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;006/590/717&lt;/code&gt;.&lt;/p&gt;
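&lt;p&gt;The triplet split is easy to reproduce in the shell. A minimal sketch (the sequence number is the example from above; the padding and slicing mirror the layout of the replication directories):&lt;/p&gt;

```shell
# Build the replication directory path for a given sequence number:
# zero-pad it to 9 digits, then split it into triplets.
seq=6590717
seq_padded=$(printf "%09d" "$seq")
path="${seq_padded:0:3}/${seq_padded:3:3}/${seq_padded:6:3}"
echo "$path"   # prints 006/590/717
```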

&lt;p&gt;The latest diff information is available in the root of the respective replication type. For minute diffs, this is: &lt;a href=&quot;https://planet.openstreetmap.org/replication/minute/state.txt&quot;&gt;https://planet.openstreetmap.org/replication/minute/state.txt&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;To synchronize your local data with the current OpenStreetMap data, you need to determine the last timestamp present in your local dump. For example, in a dump dated March 24, 2025, the last timestamp was &lt;em&gt;2025-03-24T00:59:53Z&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;/images/2025/05/2025-05-08-osmium-dump-info.png&quot; alt=&quot;osmium planet dump info&quot; /&gt;&lt;/p&gt;

&lt;p&gt;Now that you have this timestamp, you need to find the corresponding diff file (daily or hourly) and its sequence number. This service can help:&lt;/p&gt;

&lt;div class=&quot;language-sh highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;https://replicate-sequences.osm.mazdermind.de/?2013-01-01T10:00:00Z
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;or&lt;/p&gt;

&lt;div class=&quot;language-sh highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;curl &lt;span class=&quot;s2&quot;&gt;&quot;https://replicate-sequences.osm.mazdermind.de/?&lt;/span&gt;&lt;span class=&quot;si&quot;&gt;$(&lt;/span&gt;&lt;span class=&quot;nb&quot;&gt;date&lt;/span&gt; &lt;span class=&quot;nt&quot;&gt;-u&lt;/span&gt; &lt;span class=&quot;nt&quot;&gt;-d&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;@&lt;/span&gt;&lt;span class=&quot;si&quot;&gt;$(&lt;/span&gt;&lt;span class=&quot;nb&quot;&gt;stat&lt;/span&gt; &lt;span class=&quot;nt&quot;&gt;-c&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;%Y&quot;&lt;/span&gt; planet-latest.osm.pbf&lt;span class=&quot;si&quot;&gt;)&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;&lt;/span&gt; +&lt;span class=&quot;s2&quot;&gt;&quot;%FT%TZ&quot;&lt;/span&gt;&lt;span class=&quot;si&quot;&gt;)&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;h3 id=&quot;finding-diffs-manually&quot;&gt;Finding Diffs Manually&lt;/h3&gt;

&lt;p&gt;For daily and hourly diffs, you can also use the following approach:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;
    &lt;p&gt;Convert the initial timestamp (from the dump) to Unix epoch time:&lt;/p&gt;

    &lt;div class=&quot;language-sh highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;nb&quot;&gt;date&lt;/span&gt; &lt;span class=&quot;nt&quot;&gt;-d&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;2025-03-24T00:59:53Z&quot;&lt;/span&gt; +%s
&lt;span class=&quot;c&quot;&gt;# or explicitly specify the format&lt;/span&gt;
&lt;span class=&quot;nb&quot;&gt;date&lt;/span&gt; &lt;span class=&quot;nt&quot;&gt;-u&lt;/span&gt; &lt;span class=&quot;nt&quot;&gt;-j&lt;/span&gt; &lt;span class=&quot;nt&quot;&gt;-f&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;%Y-%m-%dT%H:%M:%SZ&quot;&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;2025-03-24T00:59:53Z&quot;&lt;/span&gt; +%s
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;    &lt;/div&gt;

    &lt;p&gt;This gives you the Unix time in seconds: &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;1742777993&lt;/code&gt;.&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;Retrieve the latest &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;state.txt&lt;/code&gt; timestamp and sequence number (daily or hourly)&lt;/li&gt;
  &lt;li&gt;Calculate the time difference (in days or hours), subtract this value from the sequence number, and get the download link for the diff file from which to begin syncing your local data&lt;/li&gt;
&lt;/ul&gt;

&lt;div class=&quot;language-sh highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;nv&quot;&gt;DUMP_EPOCH_TS&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;&lt;span class=&quot;si&quot;&gt;$(&lt;/span&gt;&lt;span class=&quot;nb&quot;&gt;date&lt;/span&gt; &lt;span class=&quot;nt&quot;&gt;-d&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;2025-03-24T00:59:53Z&quot;&lt;/span&gt; +%s&lt;span class=&quot;si&quot;&gt;)&lt;/span&gt;
&lt;span class=&quot;nv&quot;&gt;REFERENCE_DATE&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;&lt;span class=&quot;si&quot;&gt;$(&lt;/span&gt;wget &lt;span class=&quot;nt&quot;&gt;-q&lt;/span&gt; &lt;span class=&quot;nt&quot;&gt;-O&lt;/span&gt; - https://planet.openstreetmap.org/replication/day/state.txt 2&amp;gt;/dev/null | &lt;span class=&quot;nb&quot;&gt;grep&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;timestamp&quot;&lt;/span&gt; | &lt;span class=&quot;nb&quot;&gt;cut&lt;/span&gt; &lt;span class=&quot;nt&quot;&gt;-d&lt;/span&gt;&lt;span class=&quot;s1&quot;&gt;&apos;=&apos;&lt;/span&gt; &lt;span class=&quot;nt&quot;&gt;-f2&lt;/span&gt; | &lt;span class=&quot;nb&quot;&gt;sed&lt;/span&gt; &lt;span class=&quot;s1&quot;&gt;&apos;s/T/ /;s/Z//; s/\\//g&apos;&lt;/span&gt;&lt;span class=&quot;si&quot;&gt;)&lt;/span&gt;
&lt;span class=&quot;nv&quot;&gt;REFERENCE_SEQ&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;&lt;span class=&quot;si&quot;&gt;$(&lt;/span&gt;wget &lt;span class=&quot;nt&quot;&gt;-q&lt;/span&gt; &lt;span class=&quot;nt&quot;&gt;-O&lt;/span&gt; - https://planet.openstreetmap.org/replication/day/state.txt 2&amp;gt;/dev/null | &lt;span class=&quot;nb&quot;&gt;grep&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;sequenceNumber&quot;&lt;/span&gt; | &lt;span class=&quot;nb&quot;&gt;cut&lt;/span&gt; &lt;span class=&quot;nt&quot;&gt;-d&lt;/span&gt;&lt;span class=&quot;s1&quot;&gt;&apos;=&apos;&lt;/span&gt; &lt;span class=&quot;nt&quot;&gt;-f2&lt;/span&gt; &lt;span class=&quot;si&quot;&gt;)&lt;/span&gt;
&lt;span class=&quot;nv&quot;&gt;LAST_EPOCH_TS&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;&lt;span class=&quot;si&quot;&gt;$(&lt;/span&gt;&lt;span class=&quot;nb&quot;&gt;date&lt;/span&gt; &lt;span class=&quot;nt&quot;&gt;-d&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;$REFERENCE_DATE&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;&lt;/span&gt; +%s&lt;span class=&quot;si&quot;&gt;)&lt;/span&gt;

&lt;span class=&quot;c&quot;&gt;# Example for calculating difference in days (86400 seconds = 24 hours)&lt;/span&gt;
&lt;span class=&quot;c&quot;&gt;# Use 3600 instead for hourly diffs&lt;/span&gt;
&lt;span class=&quot;nv&quot;&gt;DIFF_TS&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;&lt;span class=&quot;k&quot;&gt;$((&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;$LAST_EPOCH_TS&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;-&lt;/span&gt; DUMP_EPOCH_TS&lt;span class=&quot;o&quot;&gt;)&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;/&lt;/span&gt; &lt;span class=&quot;m&quot;&gt;86400&lt;/span&gt; &lt;span class=&quot;k&quot;&gt;))&lt;/span&gt;
&lt;span class=&quot;nv&quot;&gt;TARGET_SEQ&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;&lt;span class=&quot;k&quot;&gt;$((&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;10#&lt;span class=&quot;nv&quot;&gt;$REFERENCE_SEQ&lt;/span&gt; - &lt;span class=&quot;nv&quot;&gt;$DIFF_TS&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;)&lt;/span&gt; + 1 &lt;span class=&quot;k&quot;&gt;))&lt;/span&gt;
&lt;span class=&quot;nv&quot;&gt;seq_padded&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;&lt;span class=&quot;si&quot;&gt;$(&lt;/span&gt;&lt;span class=&quot;nb&quot;&gt;printf&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;%09d&quot;&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;$TARGET_SEQ&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;&lt;/span&gt;&lt;span class=&quot;si&quot;&gt;)&lt;/span&gt;
&lt;span class=&quot;nv&quot;&gt;url&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;https://planet.openstreetmap.org/replication/day/&lt;/span&gt;&lt;span class=&quot;k&quot;&gt;${&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;seq_padded&lt;/span&gt;:0:3&lt;span class=&quot;k&quot;&gt;}&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;/&lt;/span&gt;&lt;span class=&quot;k&quot;&gt;${&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;seq_padded&lt;/span&gt;:3:3&lt;span class=&quot;k&quot;&gt;}&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;/&lt;/span&gt;&lt;span class=&quot;k&quot;&gt;${&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;seq_padded&lt;/span&gt;:6:3&lt;span class=&quot;k&quot;&gt;}&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;.state.txt&quot;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;The &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;$url&lt;/code&gt; variable will contain a link to the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;state.txt&lt;/code&gt; file needed to begin the replication process. This file is required for &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;osmosis&lt;/code&gt; or similar tools. (For more details on the replication process, see: &lt;a href=&quot;https://wiki.openstreetmap.org/wiki/Planet.osm/diffs&quot;&gt;https://wiki.openstreetmap.org/wiki/Planet.osm/diffs&lt;/a&gt;)&lt;/p&gt;
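&lt;p&gt;Note that the diff file itself sits next to its &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;state.txt&lt;/code&gt; under the same triplet path, so its download link can be derived by swapping the suffix. A small sketch (the sequence number in the URL is a made-up example):&lt;/p&gt;

```shell
# Turn a state.txt URL into the URL of the matching .osc.gz diff file
# via suffix substitution (the sequence number here is hypothetical).
url="https://planet.openstreetmap.org/replication/day/004/530/123.state.txt"
osc_url="${url%.state.txt}.osc.gz"
echo "$osc_url"   # prints the .osc.gz URL under the same triplet path
```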

&lt;p&gt;The approach described above works well for daily and hourly diffs but does not work for minute diffs.&lt;/p&gt;

&lt;h3 id=&quot;issues-with-minute-diff-calculations&quot;&gt;Issues with Minute Diff Calculations&lt;/h3&gt;

&lt;p&gt;Why doesn’t the same calculation work for minute diffs? Because generating a minute diff can take more than 60 seconds, resulting in “slippage” in the timeline.&lt;/p&gt;

&lt;p&gt;For example, the range of minute diffs from 6590000 to 6590227 spans 233 minutes of clock time but contains only 227 diff files.&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;/images/2025/05/2025-05-08-osm-minutes-diffs.png&quot; alt=&quot;osm-diffs&quot; /&gt;&lt;/p&gt;

&lt;p&gt;On the charts, you can see how long it took to generate each minute diff and which minutes they were created in. You can observe that at 7:59, 8:02, and 8:05 no diffs were created — slippage occurred.&lt;/p&gt;

&lt;table&gt;
  &lt;thead&gt;
    &lt;tr&gt;
      &lt;th&gt;Sequence&lt;/th&gt;
      &lt;th&gt;Timestamp&lt;/th&gt;
      &lt;th&gt;Epoch,&lt;br /&gt;seconds&lt;/th&gt;
      &lt;th&gt;Epoch,&lt;br /&gt;minutes&lt;/th&gt;
      &lt;th&gt;Time, &lt;br /&gt; sec&lt;/th&gt;
      &lt;th&gt;Clock&lt;/th&gt;
    &lt;/tr&gt;
  &lt;/thead&gt;
  &lt;tbody&gt;
    &lt;tr&gt;
      &lt;td&gt;6590&lt;strong&gt;194&lt;/strong&gt;&lt;/td&gt;
      &lt;td&gt;2025-05-08 8:12:08&lt;/td&gt;
      &lt;td&gt;1746681128&lt;/td&gt;
      &lt;td&gt;29111352&lt;/td&gt;
      &lt;td&gt;65&lt;/td&gt;
      &lt;td&gt;8:12&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;6590&lt;strong&gt;193&lt;/strong&gt;&lt;/td&gt;
      &lt;td&gt;2025-05-08 8:11:05&lt;/td&gt;
      &lt;td&gt;1746681065&lt;/td&gt;
      &lt;td&gt;29111351&lt;/td&gt;
      &lt;td&gt;63&lt;/td&gt;
      &lt;td&gt;8:11&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;6590&lt;strong&gt;192&lt;/strong&gt;&lt;/td&gt;
      &lt;td&gt;2025-05-08 8:10:01&lt;/td&gt;
      &lt;td&gt;1746681001&lt;/td&gt;
      &lt;td&gt;29111350&lt;/td&gt;
      &lt;td&gt;64&lt;/td&gt;
      &lt;td&gt;8:10&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;6590&lt;strong&gt;191&lt;/strong&gt;&lt;/td&gt;
      &lt;td&gt;2025-05-08 8:09:01&lt;/td&gt;
      &lt;td&gt;1746680941&lt;/td&gt;
      &lt;td&gt;29111349&lt;/td&gt;
      &lt;td&gt;60&lt;/td&gt;
      &lt;td&gt;8:09&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;6590&lt;strong&gt;190&lt;/strong&gt;&lt;/td&gt;
      &lt;td&gt;2025-05-08 8:08:01&lt;/td&gt;
      &lt;td&gt;1746680881&lt;/td&gt;
      &lt;td&gt;29111348&lt;/td&gt;
      &lt;td&gt;60&lt;/td&gt;
      &lt;td&gt;8:08&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;6590&lt;strong&gt;189&lt;/strong&gt;&lt;/td&gt;
      &lt;td&gt;2025-05-08 8:07:02&lt;/td&gt;
      &lt;td&gt;1746680822&lt;/td&gt;
      &lt;td&gt;29111347&lt;/td&gt;
      &lt;td&gt;59&lt;/td&gt;
      &lt;td&gt;8:07&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;6590&lt;strong&gt;188&lt;/strong&gt;&lt;/td&gt;
      &lt;td&gt;2025-05-08 &lt;strong&gt;8:06:00&lt;/strong&gt;&lt;/td&gt;
      &lt;td&gt;1746680760&lt;/td&gt;
      &lt;td&gt;29111346&lt;/td&gt;
      &lt;td&gt;62&lt;/td&gt;
      &lt;td&gt;&lt;strong&gt;8:06&lt;/strong&gt;&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;6590&lt;strong&gt;187&lt;/strong&gt;&lt;/td&gt;
      &lt;td&gt;2025-05-08 8:04:59&lt;/td&gt;
      &lt;td&gt;1746680699&lt;/td&gt;
      &lt;td&gt;29111344&lt;/td&gt;
      &lt;td&gt;61&lt;/td&gt;
      &lt;td&gt;8:04&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;6590&lt;strong&gt;186&lt;/strong&gt;&lt;/td&gt;
      &lt;td&gt;2025-05-08 8:04:00&lt;/td&gt;
      &lt;td&gt;1746680640&lt;/td&gt;
      &lt;td&gt;29111344&lt;/td&gt;
      &lt;td&gt;59&lt;/td&gt;
      &lt;td&gt;8:04&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;6590&lt;strong&gt;185&lt;/strong&gt;&lt;/td&gt;
      &lt;td&gt;2025-05-08 &lt;strong&gt;8:03:00&lt;/strong&gt;&lt;/td&gt;
      &lt;td&gt;1746680580&lt;/td&gt;
      &lt;td&gt;29111343&lt;/td&gt;
      &lt;td&gt;60&lt;/td&gt;
      &lt;td&gt;&lt;strong&gt;8:03&lt;/strong&gt;&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;6590&lt;strong&gt;184&lt;/strong&gt;&lt;/td&gt;
      &lt;td&gt;2025-05-08 8:01:59&lt;/td&gt;
      &lt;td&gt;1746680519&lt;/td&gt;
      &lt;td&gt;29111341&lt;/td&gt;
      &lt;td&gt;61&lt;/td&gt;
      &lt;td&gt;8:01&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;6590&lt;strong&gt;183&lt;/strong&gt;&lt;/td&gt;
      &lt;td&gt;2025-05-08 8:00:58&lt;/td&gt;
      &lt;td&gt;1746680458&lt;/td&gt;
      &lt;td&gt;29111340&lt;/td&gt;
      &lt;td&gt;61&lt;/td&gt;
      &lt;td&gt;8:00&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;6590&lt;strong&gt;182&lt;/strong&gt;&lt;/td&gt;
      &lt;td&gt;2025-05-08 &lt;strong&gt;8:00:00&lt;/strong&gt;&lt;/td&gt;
      &lt;td&gt;1746680400&lt;/td&gt;
      &lt;td&gt;29111340&lt;/td&gt;
      &lt;td&gt;58&lt;/td&gt;
      &lt;td&gt;&lt;strong&gt;8:00&lt;/strong&gt;&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;6590&lt;strong&gt;181&lt;/strong&gt;&lt;/td&gt;
      &lt;td&gt;2025-05-08 7:58:56&lt;/td&gt;
      &lt;td&gt;1746680336&lt;/td&gt;
      &lt;td&gt;29111338&lt;/td&gt;
      &lt;td&gt;64&lt;/td&gt;
      &lt;td&gt;7:58&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;6590&lt;strong&gt;180&lt;/strong&gt;&lt;/td&gt;
      &lt;td&gt;2025-05-08 7:57:51&lt;/td&gt;
      &lt;td&gt;1746680271&lt;/td&gt;
      &lt;td&gt;29111337&lt;/td&gt;
      &lt;td&gt;65&lt;/td&gt;
      &lt;td&gt;7:57&lt;/td&gt;
    &lt;/tr&gt;
  &lt;/tbody&gt;
&lt;/table&gt;

&lt;p&gt;This table also shows that there were 3 ‘slippages’ within 15 minutes.&lt;/p&gt;
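&lt;p&gt;You can verify the slippage programmatically. A small shell sketch (GNU &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;date&lt;/code&gt;, as in the examples above): it maps each publication timestamp from the table to its clock minute and lists the minutes for which no diff exists. The timestamps are interpreted as UTC for simplicity; the timezone choice does not change which minutes come out missing.&lt;/p&gt;

```shell
# Publication timestamps of consecutive minute diffs, copied from the table.
stamps="2025-05-08T07:57:51Z 2025-05-08T07:58:56Z 2025-05-08T08:00:00Z \
2025-05-08T08:00:58Z 2025-05-08T08:01:59Z 2025-05-08T08:03:00Z \
2025-05-08T08:04:00Z 2025-05-08T08:04:59Z 2025-05-08T08:06:00Z"

# Map every timestamp to its clock minute (epoch seconds divided by 60).
minutes=""
for ts in $stamps; do
  minutes="$minutes $(( $(date -u -d "$ts" +%s) / 60 ))"
done
first=$(echo $minutes | awk '{print $1}')
last=$(echo $minutes | awk '{print $NF}')

# Every clock minute in the range with no diff is a "slippage".
slipped=""
for m in $(seq "$first" "$last"); do
  case " $minutes " in
    *" $m "*) ;;   # a diff exists for this minute
    *) slipped="$slipped $(date -u -d "@$(( m * 60 ))" +%H:%M)" ;;
  esac
done
echo "Slipped minutes:$slipped"   # prints Slipped minutes: 07:59 08:02 08:05
```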

&lt;p&gt;Is this critical for everyday use? Since it has been happening for a long time, apparently not. Existing change-handling tools simply ignore it. Yes, it can cause excess traffic if you start further back in time than necessary for minute diffs, but it won’t affect the final result: you will end up with your own dataset, synchronized with OpenStreetMap data with a lag of up to 2 minutes.&lt;/p&gt;

&lt;h3 id=&quot;can-this-be-fixed-maybe&quot;&gt;Can this be fixed? Maybe.&lt;/h3&gt;

&lt;p&gt;Processing time-based data is always a non-trivial task. To fix this situation:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;The &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;timestamp&lt;/code&gt; field of the diff metadata should contain the &lt;strong&gt;start&lt;/strong&gt; time (the moment in time when the diff creation process started), which should ideally be an integer (or very close to it) time in minutes, hours, and days:
    &lt;ul&gt;
      &lt;li&gt;13:00:00, 13:01:00, 13:02:00 for minutes;&lt;/li&gt;
      &lt;li&gt;14:00:00, 15:00:00 for hours; and&lt;/li&gt;
      &lt;li&gt;00:00:00 for days.&lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
  &lt;li&gt;The end time of the diff creation process would then be the modification time of the file itself, although it could also be recorded in the changeset metadata (this is the time currently stored in &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;timestamp&lt;/code&gt;).&lt;/li&gt;
  &lt;li&gt;The creation of the next diff should start at the scheduled time, regardless of whether the previous diff has finished, so two diffs may briefly be generated in parallel.&lt;/li&gt;
  &lt;li&gt;If there are no changes, an empty diff file should be published to keep the numbering monotonic, so that there are no gaps in the time series.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If this situation bothers you, you can submit suggestions for fixing it to the project maintainers, and it may eventually be addressed, though it’s hard to say how soon.&lt;/p&gt;

&lt;h2 id=&quot;recap&quot;&gt;Recap&lt;/h2&gt;

&lt;p&gt;Diffs remain an extremely valuable tool for keeping local copies of OpenStreetMap data up to date. However, it is important to understand their limitations and not to take the name literally, especially with minute diffs.&lt;/p&gt;

&lt;p&gt;For developers who use minute diffs, this means that they need to have a reliable infrastructure for processing them, take into account possible delays, and be prepared to process large amounts of data during periods of active editing.&lt;/p&gt;

&lt;p&gt;For ordinary users, this is just an interesting fact about how the open source map they love works behind the scenes. The next time you make a small edit and don’t see it on your map server instantly, don’t worry — your ‘minute’ diff might just be on its way!&lt;/p&gt;

&lt;p&gt;What do you think of minute diffs? Have you experienced any delays in data updates? Share your experience in the comments!&lt;/p&gt;
      </content>
      <author>
        <name>Andrii Holovin</name>
      </author>

      
        <category term="OpenStreetMap"/>
        <category term="Diffs"/>
        <category term="Replication"/>
        <category term="Data"/>
        <summary type="html">OpenStreetMap is a great example of a collaborative project for building an open geospatial data repository. Collecting the data is only part of the process. It’s collected so that it can be used. You can obtain data for the current map area using the Export menu.</summary>
      
    </entry>
  
    <entry xml:lang="en">
      <title type="html">Getting access to the host file system for Persistent Volume in Kind</title>
      <link href="https://blog.andygol.co.ua/en/2025/04/05/host-fs-to-backup-pv-in-kind/" rel="alternate" type="text/html" title="Getting access to the host file system for Persistent Volume in Kind"/>
      <published>2025-04-05T08:30:00+00:00</published>
      <updated>2025-04-05T08:30:00+00:00</updated>
      <id>https://blog.andygol.co.ua/en/2025/04/05/host-fs-to-backup-pv-in-kind</id>
      <content type="html" xml:base="https://blog.andygol.co.ua/en/2025/04/05/host-fs-to-backup-pv-in-kind/">
        &lt;p&gt;Kubernetes is no longer something that only platform engineers work with. More and more applications are wrapped in containers and run in container environments. What if, for some reason, you don’t have access to a cloud platform but still need to develop an application that will run in the cloud? You can use &lt;a href=&quot;https://kind.sigs.k8s.io&quot;&gt;Kind&lt;/a&gt; to deploy &lt;a href=&quot;https://andygol-k8s.netlify.app/en/docs/concepts/overview/&quot;&gt;Kubernetes&lt;/a&gt; locally.&lt;/p&gt;

&lt;blockquote&gt;
  &lt;p&gt;&lt;a href=&quot;https://kind.sigs.k8s.io&quot;&gt;Kind&lt;/a&gt; is a tool for running local Kubernetes clusters using Docker container “nodes”. It was originally designed for testing Kubernetes itself, but it can also be used for local development or CI.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;To run Kind, you’ll need &lt;a href=&quot;https://www.docker.com/&quot;&gt;docker&lt;/a&gt;, &lt;a href=&quot;https://podman.io/&quot;&gt;podman&lt;/a&gt;, or another container engine. Refer to the &lt;a href=&quot;https://kind.sigs.k8s.io/docs/user/quick-start/&quot;&gt;Kind Quick Start Guide&lt;/a&gt; on the official site.&lt;/p&gt;

&lt;h2 id=&quot;creating-a-cluster&quot;&gt;Creating a Cluster&lt;/h2&gt;

&lt;p&gt;So, we’ve installed Kind and a container runtime. We also have &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;kubectl&lt;/code&gt; – the &lt;a href=&quot;https://andygol-k8s.netlify.app/en/docs/reference/kubectl/&quot;&gt;command-line tool&lt;/a&gt; for interacting with the cluster.&lt;/p&gt;

&lt;p&gt;To create a local cluster, use the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;kind create cluster&lt;/code&gt; command.&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;/images/2025/04/2025-04-05-kind-create-cluster.png&quot; alt=&quot;kind create cluster&quot; /&gt;&lt;/p&gt;

&lt;p&gt;You can always get the necessary help by using &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;kind [command] --help&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;/images/2025/04/2025-04-05-kind-help.png&quot; alt=&quot;kind --help&quot; /&gt;&lt;/p&gt;

&lt;p&gt;We’ve created a local cluster named &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;kind&lt;/code&gt; (its kubectl context is &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;kind-kind&lt;/code&gt;). This is the default name Kind uses if the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;-n cluster_name&lt;/code&gt; or &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;--name cluster_name&lt;/code&gt; parameter is not specified.&lt;/p&gt;

&lt;p&gt;Instead of just passing CLI flags, you can also use a manifest to create a cluster with the desired parameters.&lt;/p&gt;

&lt;div class=&quot;language-sh highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;nb&quot;&gt;cat&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;&amp;lt;&amp;lt;&lt;/span&gt;&lt;span class=&quot;no&quot;&gt;EOF&lt;/span&gt;&lt;span class=&quot;sh&quot;&gt; &amp;gt; kind-config.yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
name: my-super-cluster
nodes:
- role: control-plane
- role: worker
  extraMounts:
  - hostPath: /path/to/local/data
    containerPath: /data
# - role: worker
# - role: worker
#   extraMounts:
#   - hostPath: /path/to/local/data/dump
#     containerPath: /data/dump
#   - hostPath: /path/to/local/data/diff
#     containerPath: /data/diff
EOF&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;☝️ Here you can specify the number of desired nodes, their role, and—most importantly in our case—the local file system path on the host to be mounted into the cluster nodes and used as backing storage for our Persistent Volumes. See &lt;a href=&quot;https://kind.sigs.k8s.io/docs/user/configuration/#extra-mounts&quot;&gt;Extra Mounts&lt;/a&gt; in the Kind documentation.&lt;/p&gt;
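&lt;p&gt;To sketch where this leads, a hypothetical PersistentVolume manifest could reference the mounted node path via &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;hostPath&lt;/code&gt;. The name and size below are illustrative assumptions, not taken from this post; &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;standard&lt;/code&gt; is Kind’s default StorageClass:&lt;/p&gt;

```yaml
# Illustrative only: a PV backed by the node path that Kind bind-mounted.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: local-data-pv        # hypothetical name
spec:
  capacity:
    storage: 5Gi             # illustrative size
  accessModes:
    - ReadWriteOnce
  storageClassName: standard # Kind's default StorageClass
  hostPath:
    path: /data              # the containerPath from the Kind config above
```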

&lt;p&gt;Let’s apply our config to create the cluster:
&lt;a name=&quot;create-cluster&quot;&gt;&lt;/a&gt;&lt;/p&gt;

&lt;div class=&quot;language-sh highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;kind create cluster &lt;span class=&quot;nt&quot;&gt;--config&lt;/span&gt; kind-config.yaml
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;div class=&quot;language-console highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;go&quot;&gt;Creating cluster &quot;my-super-cluster&quot; ...
 ✓ Ensuring node image (kindest/node:v1.32.2) 🖼
 ✓ Preparing nodes 📦
 ✓ Writing configuration 📜
 ✓ Starting control-plane 🕹️
 ✓ Installing CNI 🔌
 ✓ Installing StorageClass 💾
Set kubectl context to &quot;kind-my-super-cluster&quot;
You can now use your cluster with:

kubectl cluster-info --context kind-my-super-cluster

&lt;/span&gt;&lt;span class=&quot;gp&quot;&gt;Have a question, bug, or feature request? Let us know! https://kind.sigs.k8s.io/#&lt;/span&gt;community 🙂
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Let’s verify that the host file system is mounted in the worker node of our cluster.&lt;/p&gt;

&lt;div class=&quot;language-sh highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;docker container inspect my-super-cluster-worker &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
  | jq &lt;span class=&quot;s1&quot;&gt;&apos;[{&quot;Name&quot;: .[0].Name,
          &quot;BindMounts&quot;: (
            .[] |
            .Mounts[] |
            select(.Type == &quot;bind&quot;)
        )}]&apos;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;And we see everything is OK—the file system is mounted.&lt;/p&gt;

&lt;div class=&quot;language-json highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;p&quot;&gt;[&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
  &lt;/span&gt;&lt;span class=&quot;p&quot;&gt;{&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
    &lt;/span&gt;&lt;span class=&quot;nl&quot;&gt;&quot;Name&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;:&lt;/span&gt;&lt;span class=&quot;w&quot;&gt; &lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;/my-super-cluster-worker&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
    &lt;/span&gt;&lt;span class=&quot;nl&quot;&gt;&quot;BindMounts&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;:&lt;/span&gt;&lt;span class=&quot;w&quot;&gt; &lt;/span&gt;&lt;span class=&quot;p&quot;&gt;{&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
      &lt;/span&gt;&lt;span class=&quot;nl&quot;&gt;&quot;Type&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;:&lt;/span&gt;&lt;span class=&quot;w&quot;&gt; &lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;bind&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
      &lt;/span&gt;&lt;span class=&quot;nl&quot;&gt;&quot;Source&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;:&lt;/span&gt;&lt;span class=&quot;w&quot;&gt; &lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;/host_mnt/path/to/local/data&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
      &lt;/span&gt;&lt;span class=&quot;nl&quot;&gt;&quot;Destination&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;:&lt;/span&gt;&lt;span class=&quot;w&quot;&gt; &lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;/data&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
      &lt;/span&gt;&lt;span class=&quot;nl&quot;&gt;&quot;Mode&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;:&lt;/span&gt;&lt;span class=&quot;w&quot;&gt; &lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
      &lt;/span&gt;&lt;span class=&quot;nl&quot;&gt;&quot;RW&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;:&lt;/span&gt;&lt;span class=&quot;w&quot;&gt; &lt;/span&gt;&lt;span class=&quot;kc&quot;&gt;true&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
      &lt;/span&gt;&lt;span class=&quot;nl&quot;&gt;&quot;Propagation&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;:&lt;/span&gt;&lt;span class=&quot;w&quot;&gt; &lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;rprivate&quot;&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
    &lt;/span&gt;&lt;span class=&quot;p&quot;&gt;}&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
  &lt;/span&gt;&lt;span class=&quot;p&quot;&gt;},&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
  &lt;/span&gt;&lt;span class=&quot;p&quot;&gt;{&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
    &lt;/span&gt;&lt;span class=&quot;nl&quot;&gt;&quot;Name&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;:&lt;/span&gt;&lt;span class=&quot;w&quot;&gt; &lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;/my-super-cluster-worker&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
    &lt;/span&gt;&lt;span class=&quot;nl&quot;&gt;&quot;BindMounts&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;:&lt;/span&gt;&lt;span class=&quot;w&quot;&gt; &lt;/span&gt;&lt;span class=&quot;p&quot;&gt;{&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
      &lt;/span&gt;&lt;span class=&quot;nl&quot;&gt;&quot;Type&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;:&lt;/span&gt;&lt;span class=&quot;w&quot;&gt; &lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;bind&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
      &lt;/span&gt;&lt;span class=&quot;nl&quot;&gt;&quot;Source&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;:&lt;/span&gt;&lt;span class=&quot;w&quot;&gt; &lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;/lib/modules&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
      &lt;/span&gt;&lt;span class=&quot;nl&quot;&gt;&quot;Destination&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;:&lt;/span&gt;&lt;span class=&quot;w&quot;&gt; &lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;/lib/modules&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
      &lt;/span&gt;&lt;span class=&quot;nl&quot;&gt;&quot;Mode&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;:&lt;/span&gt;&lt;span class=&quot;w&quot;&gt; &lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;ro&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
      &lt;/span&gt;&lt;span class=&quot;nl&quot;&gt;&quot;RW&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;:&lt;/span&gt;&lt;span class=&quot;w&quot;&gt; &lt;/span&gt;&lt;span class=&quot;kc&quot;&gt;false&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
      &lt;/span&gt;&lt;span class=&quot;nl&quot;&gt;&quot;Propagation&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;:&lt;/span&gt;&lt;span class=&quot;w&quot;&gt; &lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;rprivate&quot;&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
    &lt;/span&gt;&lt;span class=&quot;p&quot;&gt;}&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
  &lt;/span&gt;&lt;span class=&quot;p&quot;&gt;}&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;]&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;
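
&lt;p&gt;As a quicker sanity check, you can also list the mounted directory from inside the node container (a sketch; the container name matches the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;Name&lt;/code&gt; field in the inspect output):&lt;/p&gt;

&lt;div class=&quot;language-sh highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;docker &lt;span class=&quot;nb&quot;&gt;exec&lt;/span&gt; my-super-cluster-worker &lt;span class=&quot;nb&quot;&gt;ls&lt;/span&gt; &lt;span class=&quot;nt&quot;&gt;-l&lt;/span&gt; /data
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;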

&lt;h2 id=&quot;creating-a-persistentvolume-and-persistentvolumeclaim&quot;&gt;Creating a PersistentVolume and PersistentVolumeClaim&lt;/h2&gt;

&lt;p&gt;Let’s define a manifest for our Persistent Volume:&lt;/p&gt;

&lt;div class=&quot;language-yaml highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;na&quot;&gt;apiVersion&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;v1&lt;/span&gt;
&lt;span class=&quot;na&quot;&gt;kind&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;PersistentVolume&lt;/span&gt;
&lt;span class=&quot;na&quot;&gt;metadata&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
  &lt;span class=&quot;na&quot;&gt;name&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;my-super-cluster-pv&lt;/span&gt;
&lt;span class=&quot;na&quot;&gt;spec&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
  &lt;span class=&quot;na&quot;&gt;capacity&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
    &lt;span class=&quot;na&quot;&gt;storage&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;100Gi&lt;/span&gt;
  &lt;span class=&quot;na&quot;&gt;accessModes&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
    &lt;span class=&quot;pi&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;ReadWriteOnce&lt;/span&gt;
  &lt;span class=&quot;na&quot;&gt;volumeMode&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;Filesystem&lt;/span&gt;
  &lt;span class=&quot;na&quot;&gt;hostPath&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
    &lt;span class=&quot;na&quot;&gt;path&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;/data&quot;&lt;/span&gt;
  &lt;span class=&quot;na&quot;&gt;storageClassName&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;my-storageclass&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;We’ll also create a PersistentVolumeClaim to mount the PV in workloads:&lt;/p&gt;

&lt;div class=&quot;language-yaml highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;na&quot;&gt;apiVersion&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;v1&lt;/span&gt;
&lt;span class=&quot;na&quot;&gt;kind&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;PersistentVolumeClaim&lt;/span&gt;
&lt;span class=&quot;na&quot;&gt;metadata&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
  &lt;span class=&quot;na&quot;&gt;name&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;my-super-cluster-pvc&lt;/span&gt;
&lt;span class=&quot;na&quot;&gt;spec&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
  &lt;span class=&quot;na&quot;&gt;accessModes&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
    &lt;span class=&quot;pi&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;ReadWriteOnce&lt;/span&gt;
  &lt;span class=&quot;na&quot;&gt;resources&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
    &lt;span class=&quot;na&quot;&gt;requests&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
      &lt;span class=&quot;na&quot;&gt;storage&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;100Gi&lt;/span&gt;
  &lt;span class=&quot;na&quot;&gt;storageClassName&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;my-storageclass&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Now the most important part 🥁—creating a StorageClass that explicitly links the PVC to the PV.&lt;/p&gt;

&lt;blockquote&gt;
  &lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; Kind &lt;a href=&quot;#create-cluster&quot;&gt;creates&lt;/a&gt; a default StorageClass when the cluster is created, but it has &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;reclaimPolicy: Delete&lt;/code&gt;, which is not what we want.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;div class=&quot;language-sh highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;kubectl get storageclass
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;div class=&quot;language-console highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;go&quot;&gt;NAME                 PROVISIONER             RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
standard (default)   rancher.io/local-path   Delete          WaitForFirstConsumer   false                  80m
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;This means the contents of the volume will be deleted as soon as the PVC is released—something we want to avoid.&lt;/p&gt;

&lt;p&gt;Let’s define our own StorageClass:&lt;/p&gt;

&lt;div class=&quot;language-sh highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;kubectl apply &lt;span class=&quot;nt&quot;&gt;-f&lt;/span&gt; -  &lt;span class=&quot;o&quot;&gt;&amp;lt;&amp;lt;&lt;/span&gt;&lt;span class=&quot;no&quot;&gt;EOF&lt;/span&gt;&lt;span class=&quot;sh&quot;&gt;
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: my-storageclass
provisioner: rancher.io/local-path
parameters:
  nodePath: /data
reclaimPolicy: Retain
volumeBindingMode: WaitForFirstConsumer
&lt;/span&gt;&lt;span class=&quot;no&quot;&gt;EOF
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;div class=&quot;language-console highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;go&quot;&gt;storageclass.storage.k8s.io/my-storageclass created
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;and check it:&lt;/p&gt;

&lt;div class=&quot;language-sh highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;kubectl get storageclass
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;div class=&quot;language-console highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;go&quot;&gt;NAME                 PROVISIONER             RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
my-storageclass      rancher.io/local-path   Retain          WaitForFirstConsumer   false                  5m27s
standard (default)   rancher.io/local-path   Delete          WaitForFirstConsumer   false                  91m
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Make it the default, just in case:&lt;/p&gt;

&lt;div class=&quot;language-sh highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;kubectl patch storageclass my-storageclass &lt;span class=&quot;nt&quot;&gt;-p&lt;/span&gt; &lt;span class=&quot;s1&quot;&gt;&apos;{&quot;metadata&quot;: {&quot;annotations&quot;:{&quot;storageclass.kubernetes.io/is-default-class&quot;:&quot;true&quot;}}}&apos;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;And make &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;standard&lt;/code&gt; non-default:&lt;/p&gt;

&lt;div class=&quot;language-sh highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;kubectl patch storageclass standard &lt;span class=&quot;nt&quot;&gt;-p&lt;/span&gt; &lt;span class=&quot;s1&quot;&gt;&apos;{&quot;metadata&quot;: {&quot;annotations&quot;:{&quot;storageclass.kubernetes.io/is-default-class&quot;:&quot;false&quot;}}}&apos;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Check the result:&lt;/p&gt;

&lt;div class=&quot;language-sh highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;kubectl get storageclass
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;div class=&quot;language-console highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;go&quot;&gt;NAME                        PROVISIONER             RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
my-storageclass (default)   rancher.io/local-path   Retain          WaitForFirstConsumer   false                  12m
standard                    rancher.io/local-path   Delete          WaitForFirstConsumer   false                  98m
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;h2 id=&quot;using-the-persistentvolumeclaim-in-a-pod&quot;&gt;Using the PersistentVolumeClaim in a Pod&lt;/h2&gt;

&lt;p&gt;Apply the PV and PVC manifests to the cluster:&lt;/p&gt;

&lt;div class=&quot;language-sh highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;kubectl apply &lt;span class=&quot;nt&quot;&gt;-f&lt;/span&gt; pv.yaml &lt;span class=&quot;nt&quot;&gt;-f&lt;/span&gt; pvc.yaml
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Now create a pod that uses the PVC:&lt;/p&gt;

&lt;div class=&quot;language-sh highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;kubectl apply &lt;span class=&quot;nt&quot;&gt;-f&lt;/span&gt; - &lt;span class=&quot;o&quot;&gt;&amp;lt;&amp;lt;&lt;/span&gt;&lt;span class=&quot;no&quot;&gt;EOF&lt;/span&gt;&lt;span class=&quot;sh&quot;&gt;
apiVersion: v1
kind: Pod
metadata:
  name: debug-pod
spec:
  containers:
  - name: debug-container
    image: busybox:latest
    command: [&quot;sh&quot;, &quot;-c&quot;, &quot;sleep 3600&quot;]
    volumeMounts:
    - mountPath: &quot;/data&quot;
      name: my-super-cluster
  volumes:
  - name: my-super-cluster
    persistentVolumeClaim:
      claimName: my-super-cluster-pvc
&lt;/span&gt;&lt;span class=&quot;no&quot;&gt;EOF
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Check that the PVC is bound to the PV and used by our test pod:&lt;/p&gt;

&lt;div class=&quot;language-sh highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;kubectl get pv
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;div class=&quot;language-console highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;go&quot;&gt;NAME                  CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                          STORAGECLASS      VOLUMEATTRIBUTESCLASS   REASON   AGE
&lt;/span&gt;&lt;span class=&quot;gp&quot;&gt;my-super-cluster-pv   100Gi      RWO            Retain           Bound    default/my-super-cluster-pvc   my-storageclass   &amp;lt;unset&amp;gt;&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;                          &lt;/span&gt;9m20s
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;div class=&quot;language-sh highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;kubectl get pvc
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;div class=&quot;language-console highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;go&quot;&gt;NAME                   STATUS   VOLUME                CAPACITY   ACCESS MODES   STORAGECLASS      VOLUMEATTRIBUTESCLASS   AGE
&lt;/span&gt;&lt;span class=&quot;gp&quot;&gt;my-super-cluster-pvc   Bound    my-super-cluster-pv   100Gi      RWO            my-storageclass   &amp;lt;unset&amp;gt;&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;                 &lt;/span&gt;8m48s
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;The &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;Bound&lt;/code&gt; status confirms that the PVC has successfully bound to the PV.&lt;/p&gt;
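&lt;p&gt;If you prefer a script-friendly check, a jsonpath query can print just the phase, which should output &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;Bound&lt;/code&gt;:&lt;/p&gt;

&lt;div class=&quot;language-sh highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;kubectl get pvc my-super-cluster-pvc &lt;span class=&quot;nt&quot;&gt;-o&lt;/span&gt; &lt;span class=&quot;nv&quot;&gt;jsonpath&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;&lt;span class=&quot;s1&quot;&gt;&apos;{.status.phase}&apos;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;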

&lt;div class=&quot;language-sh highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;kubectl describe pod debug-pod
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;div class=&quot;language-console highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;go&quot;&gt;Name:             debug-pod
Namespace:        default
Priority:         0
Service Account:  default
Node:             kind-control-plane/172.20.0.4
Start Time:       Fri, 04 Apr 2025 18:17:09 +0300
&lt;/span&gt;&lt;span class=&quot;gp&quot;&gt;Labels:           &amp;lt;none&amp;gt;&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
&lt;/span&gt;&lt;span class=&quot;gp&quot;&gt;Annotations:      &amp;lt;none&amp;gt;&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
&lt;/span&gt;&lt;span class=&quot;go&quot;&gt;Status:           Running
IP:               10.244.0.5
IPs:
  IP:  10.244.0.5
Containers:
  debug-container:
    Container ID:  containerd://d030a6edfc13c314853f22efc505990bbbb8e3954ed1c9887b9c7b3be575a0be
    Image:         busybox:latest
    Image ID:      docker.io/library/busybox@sha256:37f7b378a29ceb4c551b1b5582e27747b855bbfaa73fa11914fe0df028dc581f
&lt;/span&gt;&lt;span class=&quot;gp&quot;&gt;    Port:          &amp;lt;none&amp;gt;&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
&lt;/span&gt;&lt;span class=&quot;gp&quot;&gt;    Host Port:     &amp;lt;none&amp;gt;&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
&lt;/span&gt;&lt;span class=&quot;go&quot;&gt;    Command:
      sh
      -c
      sleep 3600
    State:          Running
      Started:      Fri, 04 Apr 2025 18:17:13 +0300
    Ready:          True
    Restart Count:  0
&lt;/span&gt;&lt;span class=&quot;gp&quot;&gt;    Environment:    &amp;lt;none&amp;gt;&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
&lt;/span&gt;&lt;span class=&quot;go&quot;&gt;    Mounts:
      /data from my-super-cluster (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-5wdzj (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True
  Initialized                 True
  Ready                       True
  ContainersReady             True
  PodScheduled                True
Volumes:
  my-super-cluster:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  my-super-cluster-pvc
    ReadOnly:   false
  kube-api-access-5wdzj:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
&lt;/span&gt;&lt;span class=&quot;gp&quot;&gt;    ConfigMapOptional:       &amp;lt;nil&amp;gt;&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
&lt;/span&gt;&lt;span class=&quot;go&quot;&gt;    DownwardAPI:             true
QoS Class:                   BestEffort
&lt;/span&gt;&lt;span class=&quot;gp&quot;&gt;Node-Selectors:              &amp;lt;none&amp;gt;&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
&lt;/span&gt;&lt;span class=&quot;go&quot;&gt;Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type    Reason     Age    From               Message
  ----    ------     ----   ----               -------
  Normal  Scheduled  8m36s  default-scheduler  Successfully assigned default/debug-pod to kind-control-plane
  Normal  Pulling    8m36s  kubelet            Pulling image &quot;busybox:latest&quot;
  Normal  Pulled     8m32s  kubelet            Successfully pulled image &quot;busybox:latest&quot; in 3.395s (3.395s including waiting). Image size: 1855985 bytes.
  Normal  Created    8m32s  kubelet            Created container: debug-container
  Normal  Started    8m32s  kubelet            Started container debug-container
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Our pod has been successfully created and is running.&lt;/p&gt;

&lt;p&gt;Access the pod’s terminal and verify that the volume is mounted and functioning:&lt;/p&gt;

&lt;div class=&quot;language-sh highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;kubectl &lt;span class=&quot;nb&quot;&gt;exec&lt;/span&gt; &lt;span class=&quot;nt&quot;&gt;-it&lt;/span&gt; debug-pod &lt;span class=&quot;nt&quot;&gt;--&lt;/span&gt; sh
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;div class=&quot;language-console highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;gp&quot;&gt;/ #&lt;/span&gt;&lt;span class=&quot;w&quot;&gt; &lt;/span&gt;&lt;span class=&quot;nb&quot;&gt;ls&lt;/span&gt; &lt;span class=&quot;nt&quot;&gt;-l&lt;/span&gt; / | &lt;span class=&quot;nb&quot;&gt;grep &lt;/span&gt;data
&lt;span class=&quot;go&quot;&gt;drwxr-xr-x    2 root     root          4096 Apr  4 15:17 data
&lt;/span&gt;&lt;span class=&quot;gp&quot;&gt;/ #&lt;/span&gt;&lt;span class=&quot;w&quot;&gt; &lt;/span&gt;&lt;span class=&quot;nb&quot;&gt;touch&lt;/span&gt; /data/somefile.txt
&lt;span class=&quot;gp&quot;&gt;/ #&lt;/span&gt;&lt;span class=&quot;w&quot;&gt; &lt;/span&gt;&lt;span class=&quot;nb&quot;&gt;ls&lt;/span&gt; &lt;span class=&quot;nt&quot;&gt;-l&lt;/span&gt; /data
&lt;span class=&quot;go&quot;&gt;total 0
-rw-r--r--    1 root     root             0 Apr  4 15:31 somefile.txt
&lt;/span&gt;&lt;span class=&quot;gp&quot;&gt;/ #&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
&lt;/span&gt;&lt;span class=&quot;gp&quot;&gt;/ #&lt;/span&gt;&lt;span class=&quot;w&quot;&gt; &lt;/span&gt;&lt;span class=&quot;nb&quot;&gt;exit&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Now check the host file system mounted into the cluster node—you should see the newly created &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;somefile.txt&lt;/code&gt;.&lt;/p&gt;
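
&lt;p&gt;For example, using the placeholder host path from the bind-mount output earlier (substitute the directory you actually configured in &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;kind-config.yaml&lt;/code&gt;):&lt;/p&gt;

&lt;div class=&quot;language-sh highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;nb&quot;&gt;ls&lt;/span&gt; &lt;span class=&quot;nt&quot;&gt;-l&lt;/span&gt; /path/to/local/data
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;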

&lt;h2 id=&quot;summary&quot;&gt;Summary&lt;/h2&gt;

&lt;p&gt;We’ve created a Persistent Volume Claim to use in a workload, bound to a Persistent Volume via a custom StorageClass. The PV uses the file system of a cluster node, which in turn maps to the host file system.&lt;/p&gt;

&lt;p&gt;This setup allows us to reliably store and reuse data in Persistent Volumes across workloads—even though workloads have an inherently &lt;a href=&quot;https://andygol-k8s.netlify.app/en/docs/concepts/workloads/pods/pod-lifecycle/&quot;&gt;ephemeral lifecycle&lt;/a&gt;. It also allows us to preload data from the host file system and make it available to pods.&lt;/p&gt;

&lt;h2 id=&quot;cleanup&quot;&gt;Cleanup&lt;/h2&gt;

&lt;p&gt;To delete the cluster, run:&lt;/p&gt;

&lt;div class=&quot;language-sh highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;kind delete cluster &lt;span class=&quot;nt&quot;&gt;--name&lt;/span&gt; my-super-cluster
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;div class=&quot;language-console highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;go&quot;&gt;Deleting cluster &quot;my-super-cluster&quot; ...
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Wait for Kind to delete the cluster. If needed, manually remove the mounted files from the host file system.&lt;/p&gt;
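
&lt;p&gt;For example, with the placeholder host path used earlier (double-check the path before deleting anything):&lt;/p&gt;

&lt;div class=&quot;language-sh highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;nb&quot;&gt;rm&lt;/span&gt; &lt;span class=&quot;nt&quot;&gt;-i&lt;/span&gt; /path/to/local/data/somefile.txt
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;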

&lt;h2 id=&quot;further-reading&quot;&gt;Further Reading&lt;/h2&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;a href=&quot;https://kind.sigs.k8s.io/docs/user/quick-start/&quot;&gt;Kind Quick Start&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;https://mauilion.dev/posts/kind-pvc/&quot;&gt;Kind Persistent Volumes&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;https://github.com/rancher/local-path-provisioner#storage-classes&quot;&gt;Rancher Local Path Provisioner&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;https://andygol-k8s.netlify.app/en/docs/concepts/storage/volumes/&quot;&gt;Volumes&lt;/a&gt;, &lt;a href=&quot;https://andygol-k8s.netlify.app/en/docs/concepts/storage/persistent-volumes/&quot;&gt;Persistent Volumes&lt;/a&gt;, &lt;a href=&quot;https://andygol-k8s.netlify.app/en/docs/concepts/storage/storage-classes/&quot;&gt;Storage Classes&lt;/a&gt; in Kubernetes&lt;/li&gt;
&lt;/ul&gt;
      </content>
      <author>
        <name>Andrii Holovin</name>
      </author>

      
        <category term="Kubernetes"/>
        <category term="Persistent Volumes"/>
        <category term="Node"/>
        <category term="Host"/>
        <summary type="html">Kubernetes is no longer something that only platform engineers work with. More and more applications are wrapped in containers and run in container environments. What if, for some reason, you don’t have access to a cloud platform but still need to develop an application that will run in the cloud? You can use Kind to deploy Kubernetes locally.</summary>
      
    </entry>
  
    <entry xml:lang="en">
      <title type="html">Deploy a container from the ghcr.io private registry to Amazon ECS</title>
      <link href="https://blog.andygol.co.ua/en/2024/06/26/deploy-container-from-the-ghcr-private-registry-to-ecs/" rel="alternate" type="text/html" title="Deploy a container from the ghcr.io private registry to Amazon ECS"/>
      <published>2024-06-26T08:30:00+00:00</published>
      <updated>2024-06-26T08:30:00+00:00</updated>
      <id>https://blog.andygol.co.ua/en/2024/06/26/deploy-container-from-the-ghcr-private-registry-to-ecs</id>
      <content type="html" xml:base="https://blog.andygol.co.ua/en/2024/06/26/deploy-container-from-the-ghcr-private-registry-to-ecs/">
        &lt;p&gt;Everyone eventually reaches a point where they need to deploy a container from a private registry to ECS. Amazon’s documentation isn’t the epitome of clarity, so I’ve prepared a guide for you (and primarily for myself) on what needs to be done.&lt;/p&gt;

&lt;h2 id=&quot;prerequisites&quot;&gt;Prerequisites&lt;/h2&gt;

&lt;h3 id=&quot;container-in-ghcrio&quot;&gt;Container in ghcr.io&lt;/h3&gt;

&lt;p&gt;First, you need to upload your container to the &lt;a href=&quot;https://ghcr.io/&quot;&gt;GitHub Container Registry&lt;/a&gt;. This can be a public or private repository.&lt;/p&gt;

&lt;p&gt;Log in to GitHub and navigate to the repository page where your container is located. In the &lt;strong&gt;Packages&lt;/strong&gt; section, select the artifact that is your previously created container.&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;/images/2024/06/ghcr-to-ecs-container.png&quot; alt=&quot;GitHub Container Registry&quot; /&gt;&lt;/p&gt;

&lt;p&gt;To access your registry, you will need a Personal Access Token (PAT). To create a PAT, go to your account settings.&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;/images/2024/06/ghcr-to-ecs-profile-settings.png&quot; alt=&quot;GitHub Profile Settings&quot; /&gt;&lt;/p&gt;

&lt;p&gt;At the bottom of the page, select &lt;strong&gt;Developer settings&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;/images/2024/06/ghcr-to-ecs-profile-dev-settings.png&quot; alt=&quot;GitHub Developer Settings&quot; /&gt;&lt;/p&gt;

&lt;p&gt;Next, select &lt;strong&gt;Personal access tokens&lt;/strong&gt;/&lt;strong&gt;Tokens (classic)&lt;/strong&gt;/&lt;strong&gt;Generate new token (classic)&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;/images/2024/06/ghcr-to-ecs-profile-token-classic.png&quot; alt=&quot;GitHub Personal Access Tokens&quot; /&gt;&lt;/p&gt;

&lt;p&gt;Alternatively, follow this link to create a PAT — &lt;a href=&quot;https://github.com/settings/tokens/new&quot;&gt;https://github.com/settings/tokens/new&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Add a note if needed, choose the expiration date for the token, and specify the required permissions — in this case, we only need permissions to fetch packages (&lt;strong&gt;read:packages&lt;/strong&gt; — Download packages from GitHub Package Registry). At the bottom of the page, click &lt;strong&gt;Generate token&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;/images/2024/06/ghcr-to-ecs-profile-new-token.png&quot; alt=&quot;GitHub New Personal Access Token&quot; /&gt;&lt;/p&gt;

&lt;p&gt;After that, you will see a page with your new token. Copy it and save it in a secure place. We will use it to access the registry.&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;/images/2024/06/ghcr-to-ecs-profile-created-token.png&quot; alt=&quot;GitHub Personal Access Token&quot; /&gt;&lt;/p&gt;
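
&lt;p&gt;With the token saved, for example in a &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;GHCR_PAT&lt;/code&gt; environment variable (a placeholder name), you can verify that it works by logging in to the registry with Docker; replace &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;YOUR_GITHUB_USERNAME&lt;/code&gt; with your account name:&lt;/p&gt;

&lt;div class=&quot;language-sh highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;nb&quot;&gt;echo&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;$GHCR_PAT&quot;&lt;/span&gt; | docker login ghcr.io &lt;span class=&quot;nt&quot;&gt;-u&lt;/span&gt; YOUR_GITHUB_USERNAME &lt;span class=&quot;nt&quot;&gt;--password-stdin&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;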

&lt;h3 id=&quot;ecs&quot;&gt;ECS&lt;/h3&gt;

&lt;p&gt;You should have an AWS account set up, optionally with &lt;a href=&quot;https://aws.amazon.com/cli/&quot;&gt;AWS CLI&lt;/a&gt; and &lt;a href=&quot;https://github.com/aws/amazon-ecs-cli&quot;&gt;Amazon ECS CLI&lt;/a&gt; installed locally. If you don’t want to install them locally, you can use &lt;a href=&quot;https://aws.amazon.com/cloudshell/&quot;&gt;AWS CloudShell&lt;/a&gt;.&lt;/p&gt;

&lt;h2 id=&quot;creating-a-cmk-in-aws-kms&quot;&gt;Creating a CMK in AWS KMS&lt;/h2&gt;

&lt;p&gt;First, we need to create a &lt;a href=&quot;https://docs.aws.amazon.com/kms/latest/developerguide/concepts.html#kms_keys&quot;&gt;CMK&lt;/a&gt; (Customer Master Key) and an alias for it in &lt;a href=&quot;https://aws.amazon.com/kms/&quot;&gt;AWS KMS&lt;/a&gt;. The CMK is used by &lt;a href=&quot;https://docs.aws.amazon.com/kms/latest/developerguide/services-secrets-manager.html&quot;&gt;AWS Secrets Manager&lt;/a&gt; for &lt;a href=&quot;https://en.wikipedia.org/wiki/Hybrid_cryptosystem#Envelope_encryption&quot;&gt;envelope encryption&lt;/a&gt; of data containing sensitive information. An alias acts as a name for your CMK, making it easier to remember and use than the key ID itself. You can also reference the alias in your code and later point it at a different key without touching the code that uses it.&lt;/p&gt;

&lt;div class=&quot;language-sh highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;aws kms create-key &lt;span class=&quot;nt&quot;&gt;--query&lt;/span&gt; KeyMetadata.Arn &lt;span class=&quot;nt&quot;&gt;--output&lt;/span&gt; text
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;In response, you will receive the key ID in the form of an Amazon Resource Name (&lt;a href=&quot;https://docs.aws.amazon.com/general/latest/gr/aws-arns-and-namespaces.html&quot;&gt;ARN&lt;/a&gt;):&lt;/p&gt;

&lt;pre&gt;&lt;code class=&quot;language-none&quot;&gt;arn:aws:kms:eu-central-1:123456789012:key/abc123de-4567-89fa-0bcd-efgh12345678
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Now we will create an alias for our key:&lt;/p&gt;

&lt;div class=&quot;language-sh highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;aws kms create-alias &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
  &lt;span class=&quot;nt&quot;&gt;--alias-name&lt;/span&gt; &lt;span class=&quot;nb&quot;&gt;alias&lt;/span&gt;/ecs-ghcr &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
  &lt;span class=&quot;nt&quot;&gt;--target-key-id&lt;/span&gt; arn:aws:kms:eu-central-1:123456789012:key/abc123de-4567-89fa-0bcd-efgh12345678
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;If you haven’t set up the AWS CLI locally, you can use CloudShell and run all of these commands there.&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;/images/2024/06/ghcr-to-ecs-cloudshel.png&quot; alt=&quot;AWS CloudShell&quot; /&gt;&lt;/p&gt;

&lt;p&gt;Alternatively, you can create the CMK in the AWS Console by going to &lt;a href=&quot;https://eu-central-1.console.aws.amazon.com/kms/home?region=eu-central-1#/kms/keys&quot;&gt;AWS KMS&lt;/a&gt; and clicking the &lt;strong&gt;Create key&lt;/strong&gt; button.&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;/images/2024/06/ghcr-to-ecs-kms-create-key.png&quot; alt=&quot;AWS KMS Create Key&quot; /&gt;&lt;/p&gt;

&lt;p&gt;You will need the ARN of the CMK when creating the trust policy document in the next step.&lt;/p&gt;

&lt;h2 id=&quot;creating-a-secret-in-aws-secrets-manager&quot;&gt;Creating a Secret in AWS Secrets Manager&lt;/h2&gt;

&lt;p&gt;At this stage, we need to create a Secret that will store your username and personal access token, encrypted with the CMK, for pulling your container image from the private registry.&lt;/p&gt;

&lt;div class=&quot;language-sh highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;aws secretsmanager create-secret &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
  &lt;span class=&quot;nt&quot;&gt;--name&lt;/span&gt; ghcr_io_pat &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
  &lt;span class=&quot;nt&quot;&gt;--description&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;Secret to get packages from ghcr.io&quot;&lt;/span&gt; &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
  &lt;span class=&quot;nt&quot;&gt;--kms-key-id&lt;/span&gt; &lt;span class=&quot;nb&quot;&gt;alias&lt;/span&gt;/ecs-ghcr &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
  &lt;span class=&quot;nt&quot;&gt;--secret-string&lt;/span&gt; &lt;span class=&quot;s1&quot;&gt;&apos;{&quot;username&quot;:&quot;your_nickname&quot;, &quot;password&quot;:&quot;ghp_XXXXXXXXXXXXXXXXXXXXXXXXXXXXXX&quot;}&apos;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;You should receive the following in response:&lt;/p&gt;

&lt;div class=&quot;language-json highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;p&quot;&gt;{&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
    &lt;/span&gt;&lt;span class=&quot;nl&quot;&gt;&quot;ARN&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;:&lt;/span&gt;&lt;span class=&quot;w&quot;&gt; &lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;arn:aws:secretsmanager:eu-central-1:123456789012:secret:ghcr_io_pat-abcdEF&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
    &lt;/span&gt;&lt;span class=&quot;nl&quot;&gt;&quot;Name&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;:&lt;/span&gt;&lt;span class=&quot;w&quot;&gt; &lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;ghcr_io_pat&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
    &lt;/span&gt;&lt;span class=&quot;nl&quot;&gt;&quot;VersionId&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;:&lt;/span&gt;&lt;span class=&quot;w&quot;&gt; &lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;4b43b832-df4c-48b3-b59a-bb18287e6c15&quot;&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;}&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;The ARN of the secret should be in the output ☝️ of the previous command — in the ARN field. You will need to refer to this ARN when creating the trust policy document in the next step.&lt;/p&gt;
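&lt;p&gt;If you no longer have that output at hand, the ARN can be retrieved again with &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;describe-secret&lt;/code&gt;. A minimal sketch, assuming the secret name &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;ghcr_io_pat&lt;/code&gt; from above:&lt;/p&gt;

```shell
# Look up the secret's ARN again; falls back to a message if the AWS CLI
# is not configured or the secret does not exist.
SECRET_ARN=$(aws secretsmanager describe-secret \
  --secret-id ghcr_io_pat \
  --query ARN --output text 2>/dev/null \
  || echo "secret ghcr_io_pat not found - check your AWS CLI configuration")
echo "$SECRET_ARN"
```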

&lt;h2 id=&quot;creating-an-iam-role-for-task-execution&quot;&gt;Creating an IAM Role for Task Execution&lt;/h2&gt;

&lt;p&gt;If you already have the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;ecsTaskExecutionRole&lt;/code&gt;, you can skip this step.&lt;/p&gt;
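&lt;p&gt;Whether the role already exists can be checked with a quick sketch like this (it only reads IAM and falls back to a message if the AWS CLI is not configured):&lt;/p&gt;

```shell
# Does ecsTaskExecutionRole already exist in this account?
ROLE_ARN=$(aws iam get-role \
  --role-name ecsTaskExecutionRole \
  --query Role.Arn --output text 2>/dev/null \
  || echo "ecsTaskExecutionRole not found - create it in the next steps")
echo "$ROLE_ARN"
```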

&lt;p&gt;First, you need to create a trust policy document that specifies the principal allowed to assume the role, which in this case is the ECS tasks service:&lt;/p&gt;

&lt;div class=&quot;language-sh highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;nb&quot;&gt;cat&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;&amp;lt;&amp;lt;&lt;/span&gt; &lt;span class=&quot;no&quot;&gt;EOF&lt;/span&gt;&lt;span class=&quot;sh&quot;&gt; &amp;gt; ecs-trust-policy.json
{
    &quot;Version&quot;: &quot;2012-10-17&quot;,
    &quot;Statement&quot;: [
        {
            &quot;Effect&quot;: &quot;Allow&quot;,
            &quot;Principal&quot;: {
                &quot;Service&quot;: &quot;ecs-tasks.amazonaws.com&quot;
            },
            &quot;Action&quot;: &quot;sts:AssumeRole&quot;
        }
    ]
}
&lt;/span&gt;&lt;span class=&quot;no&quot;&gt;EOF
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;
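&lt;p&gt;Since a stray comma or quote in the policy file will only surface as an AWS CLI error later, it can be worth validating the JSON first. A minimal sketch, assuming &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;python3&lt;/code&gt; is available:&lt;/p&gt;

```shell
# Validate ecs-trust-policy.json before passing it to the AWS CLI.
# Prints the pretty-printed policy on success, or a warning otherwise.
POLICY_CHECK=$(python3 -m json.tool ecs-trust-policy.json 2>/dev/null \
  || echo "ecs-trust-policy.json is missing or not valid JSON")
echo "$POLICY_CHECK"
```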

&lt;p&gt;Create the role using the AWS CLI, passing the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;ecs-trust-policy.json&lt;/code&gt; file as the trust policy document:&lt;/p&gt;

&lt;div class=&quot;language-sh highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;aws iam create-role &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
  &lt;span class=&quot;nt&quot;&gt;--role-name&lt;/span&gt; ecsTaskExecutionRole &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
  &lt;span class=&quot;nt&quot;&gt;--assume-role-policy-document&lt;/span&gt; file://ecs-trust-policy.json
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;To grant the basic permissions on other AWS service resources needed to run Amazon ECS tasks, attach the AWS-managed &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;AmazonECSTaskExecutionRolePolicy&lt;/code&gt; to the newly created role:&lt;/p&gt;

&lt;div class=&quot;language-sh highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;aws iam attach-role-policy &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
  &lt;span class=&quot;nt&quot;&gt;--role-name&lt;/span&gt; ecsTaskExecutionRole &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
  &lt;span class=&quot;nt&quot;&gt;--policy-arn&lt;/span&gt; arn:aws:iam::aws:policy/service-role/AmazonECSTaskExecutionRolePolicy
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Now create a permissions policy document that allows the ECS task to decrypt and retrieve the secret created in AWS Secrets Manager.&lt;/p&gt;

&lt;div class=&quot;language-sh highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;nb&quot;&gt;cat&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;&amp;lt;&amp;lt;&lt;/span&gt; &lt;span class=&quot;no&quot;&gt;EOF&lt;/span&gt;&lt;span class=&quot;sh&quot;&gt; &amp;gt; ecs-secret-permission.json 
{
  &quot;Version&quot;: &quot;2012-10-17&quot;,
  &quot;Statement&quot;: [
    {
      &quot;Effect&quot;: &quot;Allow&quot;,
      &quot;Action&quot;: [
        &quot;kms:Decrypt&quot;,
        &quot;secretsmanager:GetSecretValue&quot;
      ],
      &quot;Resource&quot;: [
        &quot;arn:aws:secretsmanager:eu-central-1:123456789012:secret:ghcr_io_pat-abcdEF&quot;,
        &quot;arn:aws:kms:eu-central-1:123456789012:key/abc123de-4567-89fa-0bcd-efgh12345678&quot;
      ]
    }
  ]
}
&lt;/span&gt;&lt;span class=&quot;no&quot;&gt;EOF
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;☝️ Specify your Secret and CMK in the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;&quot;Resource&quot;: [ &amp;lt;Secret&amp;gt;, &amp;lt;CMK&amp;gt; ]&lt;/code&gt; values.&lt;/p&gt;
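&lt;p&gt;Both ARNs follow a predictable pattern, so they can be assembled from variables instead of edited into the JSON by hand. Everything below is a placeholder (region, account ID, key ID, and secret suffix); substitute your own values:&lt;/p&gt;

```shell
# Build the two Resource ARNs from placeholder values.
REGION="eu-central-1"
ACCOUNT_ID="123456789012"
SECRET_ARN="arn:aws:secretsmanager:${REGION}:${ACCOUNT_ID}:secret:ghcr_io_pat-abcdEF"
KEY_ARN="arn:aws:kms:${REGION}:${ACCOUNT_ID}:key/abc123de-4567-89fa-0bcd-efgh12345678"
printf '%s\n%s\n' "$SECRET_ARN" "$KEY_ARN"
```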

&lt;p&gt;Finally, add an inline permissions policy that allows your task to fetch the ghcr.io username and token from AWS Secrets Manager. Note that this refers to the permissions policy document created in the previous step; change the file path as needed to point at the correct location:&lt;/p&gt;

&lt;div class=&quot;language-sh highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;aws iam put-role-policy &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
  &lt;span class=&quot;nt&quot;&gt;--role-name&lt;/span&gt; ecsTaskExecutionRole &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
  &lt;span class=&quot;nt&quot;&gt;--policy-name&lt;/span&gt; ECS-SecretsManager-Permission &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
  &lt;span class=&quot;nt&quot;&gt;--policy-document&lt;/span&gt; file://ecs-secret-permission.json
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;h2 id=&quot;configuring-ecs-cli-optional&quot;&gt;Configuring ECS CLI (Optional)&lt;/h2&gt;

&lt;p&gt;The Amazon ECS Command Line Interface (ECS CLI) provides commands to simplify the creation of an Amazon ECS cluster and the AWS resources required to set it up. After installing the ECS CLI, you can further configure your AWS credentials in a named ECS profile. Profiles are stored in the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;~/.ecs/credentials&lt;/code&gt; file.&lt;/p&gt;

&lt;div class=&quot;language-sh highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;ecs-cli configure profile &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
  &lt;span class=&quot;nt&quot;&gt;--access-key&lt;/span&gt; &amp;lt;AWS_ACCESS_KEY_ID&amp;gt; &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
  &lt;span class=&quot;nt&quot;&gt;--secret-key&lt;/span&gt; &amp;lt;AWS_SECRET_ACCESS_KEY&amp;gt; &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
  &lt;span class=&quot;nt&quot;&gt;--profile-name&lt;/span&gt; &amp;lt;PROFILE_NAME&amp;gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;You can also specify a default profile to use for all ECS CLI commands:&lt;/p&gt;

&lt;div class=&quot;language-sh highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;ecs-cli configure profile default &lt;span class=&quot;nt&quot;&gt;--profile-name&lt;/span&gt; &amp;lt;PROFILE_NAME&amp;gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;If you do not configure an ECS profile or set environment variables, the default AWS profile will be used. The access key and secret access key values can be viewed in the &lt;a href=&quot;https://console.aws.amazon.com/iam/home?#security_credential&quot;&gt;AWS Management Console&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;You can further configure the ECS cluster name, launch type, and AWS region to use with the ECS CLI using the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;ecs-cli configure&lt;/code&gt; command. The &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;&amp;lt;LAUNCH_TYPE&amp;gt;&lt;/code&gt; variable can be set to &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;FARGATE&lt;/code&gt; or &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;EC2&lt;/code&gt;.&lt;/p&gt;

&lt;div class=&quot;language-sh highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;ecs-cli configure &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
  &lt;span class=&quot;nt&quot;&gt;--cluster&lt;/span&gt; &amp;lt;CLUSTER_NAME&amp;gt; &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
  &lt;span class=&quot;nt&quot;&gt;--default-launch-type&lt;/span&gt; &amp;lt;LAUNCH_TYPE&amp;gt; &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
  &lt;span class=&quot;nt&quot;&gt;--config-name&lt;/span&gt; &amp;lt;CONFIG_NAME&amp;gt; &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
  &lt;span class=&quot;nt&quot;&gt;--region&lt;/span&gt; &amp;lt;AWS_REGION&amp;gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;These values can also be specified or overridden using command flags in the subsequent steps.&lt;/p&gt;

&lt;h2 id=&quot;creating-an-amazon-ecs-cluster&quot;&gt;Creating an Amazon ECS Cluster&lt;/h2&gt;

&lt;p&gt;Create an Amazon ECS cluster using the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;ecs-cli up&lt;/code&gt; command, specifying the cluster name, AWS region (e.g., &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;eu-central-1&lt;/code&gt;), and &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;FARGATE&lt;/code&gt; as the launch type:&lt;/p&gt;

&lt;div class=&quot;language-sh highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;ecs-cli up &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
  &lt;span class=&quot;nt&quot;&gt;--cluster&lt;/span&gt; &amp;lt;CLUSTER_NAME&amp;gt; &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
  &lt;span class=&quot;nt&quot;&gt;--region&lt;/span&gt; eu-central-1 &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
  &lt;span class=&quot;nt&quot;&gt;--launch-type&lt;/span&gt; FARGATE
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;With the FARGATE launch type, AWS Fargate manages the compute resources on your behalf, so you do not need to provision your own EC2 container instances. By default, the ECS CLI also creates an AWS CloudFormation stack containing a new VPC with an attached internet gateway, two public subnets, and a security group. You can instead supply your own resources using additional flags on the command above.&lt;/p&gt;
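&lt;p&gt;If you would rather reuse an existing VPC, the ECS CLI accepts &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;--vpc&lt;/code&gt; and &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;--subnets&lt;/code&gt; flags, which must be given together. This is a hedged sketch with placeholder IDs; check &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;ecs-cli up --help&lt;/code&gt; for the exact options your version supports:&lt;/p&gt;

```shell
# Sketch: create the cluster in an existing VPC (all IDs are placeholders).
# Falls back to a message if the ECS CLI is not installed or configured.
UP_RESULT=$(ecs-cli up \
  --cluster my-cluster \
  --region eu-central-1 \
  --launch-type FARGATE \
  --vpc vpc-0abc123de456789f0 \
  --subnets subnet-0aaa111,subnet-0bbb222 2>/dev/null \
  || echo "ecs-cli up skipped - CLI not installed or not configured")
echo "$UP_RESULT"
```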

&lt;h2 id=&quot;configuring-security-group&quot;&gt;Configuring Security Group&lt;/h2&gt;

&lt;p&gt;After successfully creating the ECS cluster, you should see the VPC and subnet IDs displayed in the terminal. Next, get the JSON description of the newly created security group and note the security group ID or &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;GroupId&lt;/code&gt;. Replace the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;&amp;lt;VPC_ID&amp;gt;&lt;/code&gt; variable with the ID of the newly created VPC.&lt;/p&gt;

&lt;div class=&quot;language-sh highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;aws ec2 describe-security-groups &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
  &lt;span class=&quot;nt&quot;&gt;--filters&lt;/span&gt; &lt;span class=&quot;nv&quot;&gt;Name&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;vpc-id,Values&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;&amp;lt;VPC_ID&amp;gt; &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
  &lt;span class=&quot;nt&quot;&gt;--region&lt;/span&gt; eu-central-1
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;In response, you should get an output similar to this one:&lt;/p&gt;

&lt;div class=&quot;language-json highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;p&quot;&gt;{&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
    &lt;/span&gt;&lt;span class=&quot;nl&quot;&gt;&quot;SecurityGroups&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;:&lt;/span&gt;&lt;span class=&quot;w&quot;&gt; &lt;/span&gt;&lt;span class=&quot;p&quot;&gt;[&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
        &lt;/span&gt;&lt;span class=&quot;p&quot;&gt;{&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
            &lt;/span&gt;&lt;span class=&quot;nl&quot;&gt;&quot;Description&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;:&lt;/span&gt;&lt;span class=&quot;w&quot;&gt; &lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;default VPC security group&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
            &lt;/span&gt;&lt;span class=&quot;nl&quot;&gt;&quot;GroupName&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;:&lt;/span&gt;&lt;span class=&quot;w&quot;&gt; &lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;default&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
            &lt;/span&gt;&lt;span class=&quot;nl&quot;&gt;&quot;IpPermissions&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;:&lt;/span&gt;&lt;span class=&quot;w&quot;&gt; &lt;/span&gt;&lt;span class=&quot;p&quot;&gt;[&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
                &lt;/span&gt;&lt;span class=&quot;p&quot;&gt;{&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
                    &lt;/span&gt;&lt;span class=&quot;nl&quot;&gt;&quot;IpProtocol&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;:&lt;/span&gt;&lt;span class=&quot;w&quot;&gt; &lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;-1&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
                    &lt;/span&gt;&lt;span class=&quot;nl&quot;&gt;&quot;IpRanges&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;:&lt;/span&gt;&lt;span class=&quot;w&quot;&gt; &lt;/span&gt;&lt;span class=&quot;p&quot;&gt;[],&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
                    &lt;/span&gt;&lt;span class=&quot;nl&quot;&gt;&quot;Ipv6Ranges&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;:&lt;/span&gt;&lt;span class=&quot;w&quot;&gt; &lt;/span&gt;&lt;span class=&quot;p&quot;&gt;[],&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
                    &lt;/span&gt;&lt;span class=&quot;nl&quot;&gt;&quot;PrefixListIds&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;:&lt;/span&gt;&lt;span class=&quot;w&quot;&gt; &lt;/span&gt;&lt;span class=&quot;p&quot;&gt;[],&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
                    &lt;/span&gt;&lt;span class=&quot;nl&quot;&gt;&quot;UserIdGroupPairs&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;:&lt;/span&gt;&lt;span class=&quot;w&quot;&gt; &lt;/span&gt;&lt;span class=&quot;p&quot;&gt;[&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
                        &lt;/span&gt;&lt;span class=&quot;p&quot;&gt;{&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
                            &lt;/span&gt;&lt;span class=&quot;nl&quot;&gt;&quot;GroupId&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;:&lt;/span&gt;&lt;span class=&quot;w&quot;&gt; &lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;sg-04512c8a7bff9b34e&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
                            &lt;/span&gt;&lt;span class=&quot;nl&quot;&gt;&quot;UserId&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;:&lt;/span&gt;&lt;span class=&quot;w&quot;&gt; &lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;123456789012&quot;&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
                        &lt;/span&gt;&lt;span class=&quot;p&quot;&gt;}&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
                    &lt;/span&gt;&lt;span class=&quot;p&quot;&gt;]&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
                &lt;/span&gt;&lt;span class=&quot;p&quot;&gt;},&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
                &lt;/span&gt;&lt;span class=&quot;err&quot;&gt;…&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
            &lt;/span&gt;&lt;span class=&quot;p&quot;&gt;],&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
            &lt;/span&gt;&lt;span class=&quot;err&quot;&gt;…&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
        &lt;/span&gt;&lt;span class=&quot;p&quot;&gt;}&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
    &lt;/span&gt;&lt;span class=&quot;p&quot;&gt;]&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;}&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Add an inbound rule to the security group to allow HTTP traffic on port 8080 from any IPv4 address. Replace the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;&amp;lt;SG_ID&amp;gt;&lt;/code&gt; variable with the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;GroupId&lt;/code&gt; obtained in the previous step. This rule will let you verify that the server is running in your task and that the image was successfully pulled from the private GHCR registry.&lt;/p&gt;

&lt;div class=&quot;language-sh highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;aws ec2 authorize-security-group-ingress &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
  &lt;span class=&quot;nt&quot;&gt;--group-id&lt;/span&gt; &amp;lt;SG_ID&amp;gt; &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
  &lt;span class=&quot;nt&quot;&gt;--protocol&lt;/span&gt; tcp &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
  &lt;span class=&quot;nt&quot;&gt;--port&lt;/span&gt; 8080 &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
  &lt;span class=&quot;nt&quot;&gt;--cidr&lt;/span&gt; 0.0.0.0/0 &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
  &lt;span class=&quot;nt&quot;&gt;--region&lt;/span&gt; eu-central-1
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;h2 id=&quot;creating-an-amazon-ecs-service&quot;&gt;Creating an Amazon ECS Service&lt;/h2&gt;

&lt;p&gt;An Amazon ECS service allows you to run and maintain a specified number of instances of a task definition simultaneously. The ECS CLI can create a service from a Docker Compose file. Create the following &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;docker-compose.yml&lt;/code&gt; file, which defines a container listening on port 8080 for incoming traffic. To reference the image stored in your private GHCR registry, replace the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;&amp;lt;USER_NAME&amp;gt;&lt;/code&gt; variable with your GitHub username, the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;&amp;lt;REPO_NAME&amp;gt;&lt;/code&gt; variable with your private repository name, and the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;&amp;lt;TAG_NAME&amp;gt;&lt;/code&gt; variable with the tag you used.&lt;/p&gt;

&lt;div class=&quot;language-sh highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;nb&quot;&gt;cat&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;&amp;lt;&amp;lt;&lt;/span&gt; &lt;span class=&quot;no&quot;&gt;EOF&lt;/span&gt;&lt;span class=&quot;sh&quot;&gt; &amp;gt; docker-compose.yml
version: &quot;3&quot;
services:
    web:
        image: ghcr.io/&amp;lt;USER_NAME&amp;gt;/&amp;lt;REPO_NAME&amp;gt;:&amp;lt;TAG_NAME&amp;gt;
        ports:
            - 8080:8080
&lt;/span&gt;&lt;span class=&quot;no&quot;&gt;EOF
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Thus, if you try to deploy the image from the example at the beginning, the value of the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;image&lt;/code&gt; key will be as follows: &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;ghcr.io/andygol/switch2osm-mkdocs:main&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;You will also need to create the following &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;ecs-params.yml&lt;/code&gt; file to specify additional, Amazon ECS-specific parameters for your service. Note that the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;services&lt;/code&gt; field below corresponds to the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;services&lt;/code&gt; field in the Docker Compose file above and names the container to run. When the ECS CLI creates a task definition from the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;docker-compose.yml&lt;/code&gt; file, the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;web&lt;/code&gt; fields are merged into the ECS container definition, including the container image to use and the GHCR repository credentials needed to access it. Replace the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;&amp;lt;SECRET_ARN&amp;gt;&lt;/code&gt; variable with the ARN of the AWS Secrets Manager secret you created earlier, and replace the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;&amp;lt;SUB_1_ID&amp;gt;&lt;/code&gt;, &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;&amp;lt;SUB_2_ID&amp;gt;&lt;/code&gt;, and &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;&amp;lt;SG_ID&amp;gt;&lt;/code&gt; variables with the IDs of the two public subnets and the security group created along with the ECS cluster.&lt;/p&gt;

&lt;div class=&quot;language-sh highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;nb&quot;&gt;cat&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;&amp;lt;&amp;lt;&lt;/span&gt; &lt;span class=&quot;no&quot;&gt;EOF&lt;/span&gt;&lt;span class=&quot;sh&quot;&gt; &amp;gt; ecs-params.yml
version: 1
task_definition:
  task_execution_role: ecsTaskExecutionRole
  ecs_network_mode: awsvpc
  task_size:
    mem_limit: 0.5GB
    cpu_limit: 256
  services:
    web:
        repository_credentials: 
            credentials_parameter: &quot;&amp;lt;SECRET_ARN&amp;gt;&quot;
run_params:
  network_configuration:
    awsvpc_configuration:
      subnets:
        - &quot;&amp;lt;SUB_1_ID&amp;gt;&quot;
        - &quot;&amp;lt;SUB_2_ID&amp;gt;&quot;
      security_groups:
        - &quot;&amp;lt;SG_ID&amp;gt;&quot;
      assign_public_ip: ENABLED
&lt;/span&gt;&lt;span class=&quot;no&quot;&gt;EOF
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Next, create the ECS service from your compose file using the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;ecs-cli compose service up&lt;/code&gt; command. This command will look for your &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;docker-compose.yml&lt;/code&gt; and &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;ecs-params.yml&lt;/code&gt; files in the current directory. Replace the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;&amp;lt;CLUSTER_NAME&amp;gt;&lt;/code&gt; variable with the name of your ECS cluster and the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;&amp;lt;PROJECT_NAME&amp;gt;&lt;/code&gt; variable with the desired name of your ECS service.&lt;/p&gt;

&lt;div class=&quot;language-sh highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;ecs-cli compose &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
  &lt;span class=&quot;nt&quot;&gt;--project-name&lt;/span&gt; &amp;lt;PROJECT_NAME&amp;gt; &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
  &lt;span class=&quot;nt&quot;&gt;--cluster&lt;/span&gt; &amp;lt;CLUSTER_NAME&amp;gt; &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
  service up &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
  &lt;span class=&quot;nt&quot;&gt;--launch-type&lt;/span&gt; FARGATE
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Allow some time for your ECS service to deploy. You can then check its status using the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;ecs-cli compose service ps&lt;/code&gt; command.&lt;/p&gt;

&lt;div class=&quot;language-sh highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;ecs-cli compose &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
  &lt;span class=&quot;nt&quot;&gt;--project-name&lt;/span&gt; &amp;lt;PROJECT_NAME&amp;gt; &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
  &lt;span class=&quot;nt&quot;&gt;--cluster&lt;/span&gt; &amp;lt;CLUSTER_NAME&amp;gt; &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
  service ps
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;By navigating to the task’s public IP address on port 8080, you will see the project homepage, confirming that your task successfully pulled the container image from the private GHCR registry using your credentials for authentication.&lt;/p&gt;
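&lt;p&gt;From the command line, the same check can be scripted with &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;curl&lt;/code&gt;. &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;TASK_IP&lt;/code&gt; is a placeholder for the public IP printed by the previous command:&lt;/p&gt;

```shell
# Fetch the homepage from the running task (TASK_IP is a placeholder).
TASK_IP="${TASK_IP:-203.0.113.10}"
PAGE=$(curl -fsS --max-time 5 "http://${TASK_IP}:8080/" 2>/dev/null \
  || echo "no response from ${TASK_IP}:8080")
echo "$PAGE"
```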

&lt;h2 id=&quot;cleanup&quot;&gt;Cleanup&lt;/h2&gt;

&lt;p&gt;Stop your ECS service using the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;ecs-cli compose service down&lt;/code&gt; command.&lt;/p&gt;

&lt;div class=&quot;language-sh highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;ecs-cli compose &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
  &lt;span class=&quot;nt&quot;&gt;--project-name&lt;/span&gt; &amp;lt;PROJECT_NAME&amp;gt; &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
  &lt;span class=&quot;nt&quot;&gt;--cluster&lt;/span&gt; &amp;lt;CLUSTER_NAME&amp;gt; &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
  service down
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Delete the AWS CloudFormation stack that was created by &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;ecs-cli up&lt;/code&gt; and the associated resources using the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;ecs-cli down&lt;/code&gt; command:&lt;/p&gt;

&lt;div class=&quot;language-sh highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;ecs-cli down &lt;span class=&quot;nt&quot;&gt;--cluster&lt;/span&gt; &amp;lt;CLUSTER_NAME&amp;gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;
      </content>
      <author>
        <name>Andrii Holovin</name>
      </author>

      
        <category term="React"/>
        <category term="GitHub"/>
        <category term="Actions"/>
        <category term="CI/CD"/>
        <summary type="html">Everyone reaches a point where you need to deploy a container from a private repository to ECS. Amazon’s documentation isn’t the epitome of clarity, so I’ve prepared a guide for you (and primarily for myself) on what needs to be done.</summary>
      
    </entry>
  
    <entry xml:lang="en">
      <title type="html">Deploy a React App to GitHub Pages</title>
      <link href="https://blog.andygol.co.ua/en/2024/06/03/deploy-react-app-to-github-pages/" rel="alternate" type="text/html" title="Deploy a React App to GitHub Pages"/>
      <published>2024-06-03T08:30:00+00:00</published>
      <updated>2024-06-03T08:30:00+00:00</updated>
      <id>https://blog.andygol.co.ua/en/2024/06/03/deploy-react-app-to-github-pages</id>
      <content type="html" xml:base="https://blog.andygol.co.ua/en/2024/06/03/deploy-react-app-to-github-pages/">
        &lt;p&gt;Here, I will show you how to deploy a React app to GitHub Pages using GitHub Actions. GitHub Pages is a static site hosting service that takes HTML, CSS, and JavaScript files straight from a repository on GitHub and makes them accessible as a website. GitHub Actions is a CI/CD service that lets you automate the workflow of publishing your static site to GitHub Pages.&lt;/p&gt;

&lt;h2 id=&quot;create-a-react-app&quot;&gt;Create a React App&lt;/h2&gt;

&lt;p&gt;First, create a new React app using Create React App.&lt;/p&gt;

&lt;div class=&quot;language-bash highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;npx create-react-app react-app &lt;span class=&quot;nt&quot;&gt;--template&lt;/span&gt; typescript
&lt;span class=&quot;nb&quot;&gt;cd &lt;/span&gt;react-app
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Your git repository should be initialized automatically.&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;/images/2024/06/npx-boilerplating-1.png&quot; alt=&quot;Create React App 1&quot; /&gt;
&lt;img src=&quot;/images/2024/06/npx-boilerplating-2.png&quot; alt=&quot;Create React App 2&quot; /&gt;&lt;/p&gt;

&lt;h2 id=&quot;create-a-github-repository&quot;&gt;Create a GitHub Repository&lt;/h2&gt;

&lt;p&gt;Working in the folder with the newly created app, create a new repository on GitHub.&lt;/p&gt;

&lt;div class=&quot;language-bash highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;gh repo create
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;&lt;img src=&quot;/images/2024/06/gh-repo-create.png&quot; alt=&quot;Create GitHub Repository&quot; /&gt;&lt;/p&gt;

&lt;h2 id=&quot;adjust-repository-settings&quot;&gt;Adjust repository settings&lt;/h2&gt;

&lt;p&gt;Go to the repository settings and enable GitHub Pages. Choose the source for &lt;strong&gt;Build and deployment&lt;/strong&gt; — GitHub Actions.&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;/images/2024/06/gh-repo-pages-source.png&quot; alt=&quot;GitHub Pages Settings&quot; /&gt;&lt;/p&gt;

&lt;p&gt;Open the &lt;strong&gt;Actions&lt;/strong&gt;/&lt;strong&gt;General&lt;/strong&gt; menu, scroll down to the &lt;strong&gt;Workflow permissions&lt;/strong&gt; section, and give Read and Write permissions to the workflow.&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;/images/2024/06/gh-repo-actions.png&quot; alt=&quot;GitHub Actions Permissions&quot; /&gt;&lt;/p&gt;

&lt;h2 id=&quot;add-github-actions-workflow&quot;&gt;Add GitHub Actions Workflow&lt;/h2&gt;

&lt;p&gt;Create a new directory called &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;.github/workflows&lt;/code&gt; in the root of your repository.&lt;/p&gt;

&lt;div class=&quot;language-bash highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;nb&quot;&gt;mkdir&lt;/span&gt; &lt;span class=&quot;nt&quot;&gt;-p&lt;/span&gt; .github/workflows
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Create a new file called &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;deploy.yml&lt;/code&gt; in the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;.github/workflows&lt;/code&gt; directory.&lt;/p&gt;

&lt;p&gt;Name your workflow.&lt;/p&gt;

&lt;div class=&quot;language-yaml highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;na&quot;&gt;name&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;Deploy React App&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;This will allow you to distinguish this workflow from others.&lt;/p&gt;

&lt;p&gt;Add the following content to the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;deploy.yml&lt;/code&gt; file.&lt;/p&gt;

&lt;div class=&quot;language-yaml highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;s&quot;&gt;…&lt;/span&gt;

&lt;span class=&quot;na&quot;&gt;on&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
    &lt;span class=&quot;na&quot;&gt;push&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
        &lt;span class=&quot;na&quot;&gt;branches&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;pi&quot;&gt;[&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;main&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;]&lt;/span&gt;
    &lt;span class=&quot;na&quot;&gt;pull_request&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
        &lt;span class=&quot;na&quot;&gt;types&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
            &lt;span class=&quot;pi&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;closed&lt;/span&gt;
        &lt;span class=&quot;na&quot;&gt;branches&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;pi&quot;&gt;[&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;main&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;]&lt;/span&gt;
    &lt;span class=&quot;na&quot;&gt;workflow_dispatch&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
&lt;span class=&quot;s&quot;&gt;…&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;This section describes the events that will trigger the workflow. In this case, the workflow will be triggered on push to the main branch, when a pull request is closed, and when the workflow is manually triggered.&lt;/p&gt;

&lt;p&gt;Next, add jobs description to the workflow.&lt;/p&gt;

&lt;div class=&quot;language-yaml highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;s&quot;&gt;…&lt;/span&gt;
&lt;span class=&quot;na&quot;&gt;jobs&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
  &lt;span class=&quot;na&quot;&gt;build&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
    &lt;span class=&quot;na&quot;&gt;runs-on&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;ubuntu-latest&lt;/span&gt;
&lt;span class=&quot;s&quot;&gt;…&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Our job with the name &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;build&lt;/code&gt; will run on the latest version of Ubuntu.&lt;/p&gt;

&lt;p&gt;Add steps to the job.&lt;/p&gt;

&lt;div class=&quot;language-yaml highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;s&quot;&gt;…&lt;/span&gt;
    &lt;span class=&quot;s&quot;&gt;steps&lt;/span&gt;&lt;span class=&quot;err&quot;&gt;:&lt;/span&gt;
      &lt;span class=&quot;pi&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;na&quot;&gt;name&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;Checkout code&lt;/span&gt;
        &lt;span class=&quot;na&quot;&gt;uses&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;actions/checkout@v4&lt;/span&gt;

      &lt;span class=&quot;pi&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;na&quot;&gt;name&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;Setup Node.js&lt;/span&gt;
        &lt;span class=&quot;na&quot;&gt;uses&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;actions/setup-node@v4&lt;/span&gt;
        &lt;span class=&quot;na&quot;&gt;with&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
          &lt;span class=&quot;na&quot;&gt;node-version&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s1&quot;&gt;&apos;&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;20&apos;&lt;/span&gt;
&lt;span class=&quot;s&quot;&gt;…&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;The first step checks out the code from the repository. The second step sets up Node.js.&lt;/p&gt;

&lt;p&gt;Add more steps to the job.&lt;/p&gt;

&lt;div class=&quot;language-yaml highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;s&quot;&gt;…&lt;/span&gt;
      &lt;span class=&quot;s&quot;&gt;- name&lt;/span&gt;&lt;span class=&quot;err&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;Install dependencies&lt;/span&gt;
        &lt;span class=&quot;s&quot;&gt;run&lt;/span&gt;&lt;span class=&quot;err&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;npm ci&lt;/span&gt;

      &lt;span class=&quot;s&quot;&gt;- name&lt;/span&gt;&lt;span class=&quot;err&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;Build&lt;/span&gt;
        &lt;span class=&quot;s&quot;&gt;run&lt;/span&gt;&lt;span class=&quot;err&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;npm run build&lt;/span&gt;
&lt;span class=&quot;s&quot;&gt;…&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;The third step installs dependencies, and the fourth step builds the React app.&lt;/p&gt;

&lt;p&gt;Add the final steps to the job.&lt;/p&gt;

&lt;div class=&quot;language-yaml highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;s&quot;&gt;…&lt;/span&gt;
      &lt;span class=&quot;s&quot;&gt;- name&lt;/span&gt;&lt;span class=&quot;err&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;Upload artifact&lt;/span&gt;
        &lt;span class=&quot;s&quot;&gt;uses&lt;/span&gt;&lt;span class=&quot;err&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;actions/upload-pages-artifact@v3&lt;/span&gt;
        &lt;span class=&quot;s&quot;&gt;with&lt;/span&gt;&lt;span class=&quot;err&quot;&gt;:&lt;/span&gt;
          &lt;span class=&quot;na&quot;&gt;name&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s1&quot;&gt;&apos;&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;github-pages&apos;&lt;/span&gt;
          &lt;span class=&quot;na&quot;&gt;path&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;build&lt;/span&gt;
&lt;span class=&quot;s&quot;&gt;…&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Here we use the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;upload-pages-artifact&lt;/code&gt; action to upload the build directory as an artifact for further deployment.&lt;/p&gt;

&lt;p&gt;Create deployment steps.&lt;/p&gt;

&lt;div class=&quot;language-yaml highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;s&quot;&gt;…&lt;/span&gt;
  &lt;span class=&quot;s&quot;&gt;deploy&lt;/span&gt;&lt;span class=&quot;err&quot;&gt;:&lt;/span&gt;
    &lt;span class=&quot;na&quot;&gt;if&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;github.event.pull_request.merged == &lt;/span&gt;&lt;span class=&quot;no&quot;&gt;true&lt;/span&gt;&lt;span class=&quot;s&quot;&gt; || github.event_name == &apos;push&apos; || github.event_name == &apos;workflow_dispatch&apos;&lt;/span&gt;
    &lt;span class=&quot;na&quot;&gt;needs&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;build&lt;/span&gt;

    &lt;span class=&quot;na&quot;&gt;permissions&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
      &lt;span class=&quot;na&quot;&gt;pages&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;write&lt;/span&gt;
      &lt;span class=&quot;na&quot;&gt;id-token&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;write&lt;/span&gt;
      &lt;span class=&quot;na&quot;&gt;contents&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;read&lt;/span&gt;

    &lt;span class=&quot;na&quot;&gt;environment&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
      &lt;span class=&quot;na&quot;&gt;name&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;github-pages&lt;/span&gt;
      &lt;span class=&quot;na&quot;&gt;url&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;${{ steps.deployment.outputs.page_url }}&lt;/span&gt;
&lt;span class=&quot;s&quot;&gt;…&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;The deployment job runs only if a pull request is merged, the workflow is triggered by a push event, or it is started manually. It requires the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;build&lt;/code&gt; job to complete first. The job has permissions to write to Pages, write the id-token, and read the contents. The environment name is &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;github-pages&lt;/code&gt;, and its URL is taken from the output of the deployment step.&lt;/p&gt;

&lt;p&gt;Add steps to the deployment job.&lt;/p&gt;

&lt;div class=&quot;language-yaml highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;s&quot;&gt;…&lt;/span&gt;
    &lt;span class=&quot;s&quot;&gt;runs-on&lt;/span&gt;&lt;span class=&quot;err&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;ubuntu-latest&lt;/span&gt;
    &lt;span class=&quot;s&quot;&gt;steps&lt;/span&gt;&lt;span class=&quot;err&quot;&gt;:&lt;/span&gt;
      &lt;span class=&quot;pi&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;na&quot;&gt;name&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;Setup Pages&lt;/span&gt;
        &lt;span class=&quot;na&quot;&gt;uses&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;actions/configure-pages@v5&lt;/span&gt;

      &lt;span class=&quot;pi&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;na&quot;&gt;name&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;Deploy to GitHub Pages&lt;/span&gt;
        &lt;span class=&quot;na&quot;&gt;id&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;deployment&lt;/span&gt;
        &lt;span class=&quot;na&quot;&gt;uses&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;actions/deploy-pages@v4&lt;/span&gt;
        &lt;span class=&quot;na&quot;&gt;with&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
          &lt;span class=&quot;na&quot;&gt;token&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;${{ secrets.GITHUB_TOKEN }}&lt;/span&gt;
          &lt;span class=&quot;na&quot;&gt;artifact_name&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s1&quot;&gt;&apos;&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;github-pages&apos;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;The first step sets up Pages. The second step deploys the build to GitHub Pages using the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;deploy-pages&lt;/code&gt; action; it uses the previously uploaded artifact and the GitHub token to authenticate the deployment.&lt;/p&gt;
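
&lt;p&gt;For reference, the fragments above assemble into a single &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;deploy.yml&lt;/code&gt; roughly like this (a sketch; verify it against the full workflow file in the repository linked at the end of the post):&lt;/p&gt;

```yaml
name: Deploy React App

on:
    push:
        branches: [main]
    pull_request:
        types:
            - closed
        branches: [main]
    workflow_dispatch:

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v4

      - name: Setup Node.js
        uses: actions/setup-node@v4
        with:
          node-version: '20'

      - name: Install dependencies
        run: npm ci

      - name: Build
        run: npm run build

      - name: Upload artifact
        uses: actions/upload-pages-artifact@v3
        with:
          name: 'github-pages'
          path: build

  deploy:
    if: github.event.pull_request.merged == true || github.event_name == 'push' || github.event_name == 'workflow_dispatch'
    needs: build

    permissions:
      pages: write
      id-token: write
      contents: read

    environment:
      name: github-pages
      url: ${{ steps.deployment.outputs.page_url }}

    runs-on: ubuntu-latest
    steps:
      - name: Setup Pages
        uses: actions/configure-pages@v5

      - name: Deploy to GitHub Pages
        id: deployment
        uses: actions/deploy-pages@v4
        with:
          token: ${{ secrets.GITHUB_TOKEN }}
          artifact_name: 'github-pages'
```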

&lt;p&gt;Commit the changes and push them to the repository.&lt;/p&gt;

&lt;div class=&quot;language-bash highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;git add &lt;span class=&quot;nb&quot;&gt;.&lt;/span&gt;
git commit &lt;span class=&quot;nt&quot;&gt;-m&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;Add GitHub Actions workflow&quot;&lt;/span&gt;
git push
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;h2 id=&quot;adjust-packagejson&quot;&gt;Adjust &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;package.json&lt;/code&gt;&lt;/h2&gt;

&lt;p&gt;Open the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;package.json&lt;/code&gt; file and add the homepage property. You can set it to &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;.&lt;/code&gt; or provide the URL of your GitHub Pages site (&quot;homepage&quot;: &quot;&lt;a href=&quot;https://andygol.github.io/react-app/&quot;&gt;https://andygol.github.io/react-app/&lt;/a&gt;&quot;).&lt;/p&gt;

&lt;div class=&quot;language-diff highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;err&quot;&gt;{&lt;/span&gt;
  &quot;name&quot;: &quot;react-app&quot;,
  &quot;version&quot;: &quot;0.1.0&quot;,
  &quot;private&quot;: true,
&lt;span class=&quot;gi&quot;&gt;+  &quot;homepage&quot;: &quot;.&quot;,
&lt;/span&gt;  &quot;dependencies&quot;: {
    …
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;h2 id=&quot;deploy-the-react-app&quot;&gt;Deploy the React App&lt;/h2&gt;

&lt;p&gt;Any edits you make to the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;main&lt;/code&gt; branch will trigger the GitHub Actions workflow. The workflow will build the React app and deploy it to GitHub Pages.&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;/images/2024/06/deployed-react-app.png&quot; alt=&quot;Deployed React App&quot; /&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href=&quot;https://andygol.co.ua/react-app/&quot;&gt;https://andygol.co.ua/react-app/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;That’s it! You have successfully deployed a React app to GitHub Pages using GitHub Actions.&lt;/p&gt;

&lt;p&gt;PS. If you want to see the full workflow file, check out the &lt;a href=&quot;https://github.com/Andygol/react-app/blob/main/.github/workflows/deploy.yml&quot;&gt;GitHub repository&lt;/a&gt;.&lt;/p&gt;
      </content>
      <author>
        <name>Andrii Holovin</name>
      </author>

      
        <category term="React"/>
        <category term="GitHub"/>
        <category term="Actions"/>
        <category term="CI/CD"/>
        <summary type="html">Here, I will show you how to deploy a React app to GitHub Pages using GitHub Actions. GitHub Pages is a static site hosting service that takes HTML, CSS, and JavaScript files straight from a repository on GitHub and makes it accessible as a website. GitHub Actions is a CI/CD service that allows you to automate your workflow on publishing your static site to GitHub Pages.</summary>
      
    </entry>
  
    <entry xml:lang="en">
      <title type="html">OpenStreetMap ain’t Google Maps</title>
      <link href="https://blog.andygol.co.ua/en/2023/07/22/openstreetmap-aint-google-maps/" rel="alternate" type="text/html" title="OpenStreetMap ain&apos;t Google Maps"/>
      <published>2023-07-22T08:30:00+00:00</published>
      <updated>2023-07-22T08:30:00+00:00</updated>
      <id>https://blog.andygol.co.ua/en/2023/07/22/openstreetmap-aint-google-maps</id>
      <content type="html" xml:base="https://blog.andygol.co.ua/en/2023/07/22/openstreetmap-aint-google-maps/">
        &lt;blockquote&gt;
  &lt;p&gt;&lt;em&gt;There should be a screenshot from Toy Story where Buzz tries to explain to Woody something about the stars by showing his hand up.&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;More than 20 years ago, I came across a short computer-themed science-fiction story with elements of a thriller, in which the main character, to get out of the situations he found himself in, hastily created geoservices, and applications using them, almost from nothing. At the time I did not fully grasp the idea, because I had never encountered anything like it in my life. And here we are in a new century and millennium, with a device in our pocket that uses these mysterious geoservices: the smartphone.&lt;/p&gt;

&lt;p&gt;Let’s try to make sense of these mysterious geoservices. In fact, they existed long before smartphones, and even before computers in general. The simplest example is postal addresses: knowing the house number and the street name, you can reach a destination. You can plan the route yourself, by drawing a rough diagram on a piece of paper or holding a similar one in your head. We do this every time we leave the house, even for a trip to the nearest store for groceries: we plan a route and then stick to that plan.&lt;/p&gt;

&lt;p&gt;If you are going to an unfamiliar, remote place that you know little about or have never visited, you ask friends how to get there: what transport to take, which stop to get off at, which direction to go next and where to turn. You can take a taxi, and the driver will bring you to the specified address or landmark. If you need to send a parcel to another city, the postal service will deliver it, but you still need to specify the recipient and the place where the parcel should go. In every case, we need information about the location of some object in the area (in space), that is, geospatial information. This information is the basis of all geoservices.&lt;/p&gt;

&lt;blockquote&gt;
  &lt;p&gt;… then there will be a beet field, then a forest strip, then a corn field, then a wheat field, a hemp field, behind it a talking river — it will tell …&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Geoservices are information and technological solutions that provide collection, processing, storage and provision of geographic (geospatial) information. They help to understand and use spatial data for various purposes such as navigation, data analysis, planning, marketing and many more. With the help of geoservices, users can interact with maps, exact coordinates, geographic objects and other geodata.&lt;/p&gt;

&lt;p&gt;And now it is almost impossible to imagine calling a taxi, ordering a pizza delivery, and calculating the potential reach of the target audience without the use of geospatial data and services.&lt;/p&gt;

&lt;p&gt;I have repeatedly seen people confuse data with services. This is exactly the case mentioned in the title. I have seen several different analyses where the authors tried to compare Google Maps exactly with OpenStreetMap without having any idea of what they were comparing. Let’s figure it out.&lt;/p&gt;

&lt;h2 id=&quot;lets-talk-about-google-maps-first&quot;&gt;Let’s talk about Google Maps first&lt;/h2&gt;

&lt;p&gt;The history of Google Maps began in the mid-2000s and is an interesting example of a successful combination of innovative technology and practical application.&lt;/p&gt;

&lt;p&gt;Google Maps launched on February 8, 2005. It began as a project of a Google team that set out to build a web map service with fast map loading and interactive capabilities.&lt;/p&gt;

&lt;p&gt;The first version of Google Maps provided the ability to view maps, search for places, plan routes, and navigate using GPS. The interface was as intuitive and user-friendly as possible.&lt;/p&gt;

&lt;p&gt;As early as June 2005, Google introduced the Google Maps API, which allowed developers to integrate mapping functions into their websites and applications. This opened wide opportunities for the use of cartography in other projects.&lt;/p&gt;

&lt;p&gt;In 2007, Google Maps gained new functionality, Street View, which let users view photos taken along roads and streets, giving them a realistic picture of an area and the chance to virtually visit places they had never been.&lt;/p&gt;

&lt;p&gt;In 2008, a mobile version of Google Maps for smartphones was launched, allowing users to access maps and navigation directly on their mobile devices.&lt;/p&gt;

&lt;p&gt;In 2010, Google integrated the functionality of Google Maps with another product — Google Earth, allowing users to view 3D models and images from satellites.&lt;/p&gt;

&lt;p&gt;Google Maps continued to evolve, adding new features such as turn-by-turn navigation, offline mode, place ratings and reviews, event and establishment recommendations, and more.&lt;/p&gt;

&lt;p&gt;Today, for the average user, Google Maps is one of the most popular tools for navigating and exploring the world. Millions of people use it every day to get directions, search for places, view photos and explore new, remote locations. Google Maps has also become an important resource for businesses, researchers and public organizations, which use its services to solve a variety of tasks.&lt;/p&gt;

&lt;h2 id=&quot;openstreetmap&quot;&gt;OpenStreetMap&lt;/h2&gt;

&lt;p&gt;The history of OpenStreetMap (OSM) begins in 2004 and is connected with the desire to create a free, open and accessible database of geospatial information.&lt;/p&gt;

&lt;p&gt;OpenStreetMap was founded in Great Britain by Steve Coast in August 2004. He had the idea to create a global mapping platform where users could collaboratively create and edit geodata.&lt;/p&gt;

&lt;p&gt;The main reason for OpenStreetMap’s emergence was the lack of accessible, up-to-date geographic information for some regions of the world. Commercial global mapping services (Google Maps did not yet exist at the time) did not always provide coverage of sufficient quality or detail for certain areas.&lt;/p&gt;

&lt;p&gt;OpenStreetMap put forward the principle of open data and access to cartographic information. The project became a platform for a global community of volunteers who could join the process of creating and editing geodata of any region of the planet.&lt;/p&gt;

&lt;p&gt;Over time, OpenStreetMap attracted the attention of more and more users and built an active community. Volunteers began to actively enter data on roads, localities, information on hydrography, enterprises, institutions and other geographical objects.&lt;/p&gt;

&lt;p&gt;OpenStreetMap’s community-driven development has made it one of the largest and most active free mapping communities in the world. Through open, free access to data and collective effort, OSM has become a source of valuable, up-to-date geospatial information used in a variety of fields, including tourism, research, humanitarian aid and urban planning.&lt;/p&gt;

&lt;h2 id=&quot;goals-and-philosophy&quot;&gt;Goals and philosophy&lt;/h2&gt;

&lt;p&gt;Google Maps is a commercial product created by Google. The main goal of Google Maps is to provide users with convenient and fast interactive maps for navigation, finding places and orientation. Google makes money from it by displaying ads and providing paid services.&lt;/p&gt;

&lt;p&gt;OpenStreetMap, on the other hand, is a project based on volunteer principles and an open data philosophy. The main goal of OSM is to create a free and publicly available geospatial database for the entire world. Everyone can contribute by adding new data or correcting issues. This approach makes it possible to create detailed and up-to-date maps where other sources may be limited or outdated.&lt;/p&gt;

&lt;h2 id=&quot;data-sources&quot;&gt;Data sources&lt;/h2&gt;

&lt;p&gt;Google Maps uses commercial and proprietary data sources, data licensed from third-party companies, and machine-learning algorithms that process the large volumes of data the company accumulates, along with on-site data collection using specialized equipment and the tracking of customers’ devices. This gives it an advantage in speed and coverage, but hides the details and sources of the information. (Thus, all of us, as we move around, unknowingly help Google sell us services, through the data our smartphones send in automated reports from applications or the OS.)&lt;/p&gt;

&lt;p&gt;OpenStreetMap uses open data sources, such as satellite images provided by suppliers for the purposes of the project, open geospatial data distributed freely by national governments, data from other free open sources, as well as own contributions of project participants. This allows for more transparent and verifiable data that can be useful to the public, researchers and even humanitarian organizations.&lt;/p&gt;

&lt;h2 id=&quot;data-relevance&quot;&gt;Data relevance&lt;/h2&gt;

&lt;p&gt;Google Maps has access to resources that allow its database to be updated at regular intervals, especially where demand is high and more money can be made from the services. This helps provide users with up-to-date information such as traffic jams, events and new establishments. (Remember, Google collects the data for all of this from your smartphones.)&lt;/p&gt;

&lt;p&gt;OpenStreetMap, depending on the region, may have a different level of data relevance. Here everything depends on the number and activity of volunteers who make changes to the database. However, where there is an active community of project participants, the data can be quite relevant and detailed.&lt;/p&gt;

&lt;p&gt;You can notice that both Google Maps and OpenStreetMap have similar elements on the main page:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Map&lt;/li&gt;
  &lt;li&gt;Search&lt;/li&gt;
  &lt;li&gt;Ability to build a route&lt;/li&gt;
  &lt;li&gt;View information about the object on the map…&lt;/li&gt;
&lt;/ul&gt;

&lt;h2 id=&quot;so-whats-the-difference-then&quot;&gt;So what’s the difference then?&lt;/h2&gt;

&lt;p&gt;The main difference is that Google Maps is a commercial product whose purpose is to provide services to customers. OpenStreetMap is primarily focused on data collection and distribution.&lt;/p&gt;

&lt;blockquote&gt;
  &lt;p&gt;– And what about the API? – you may say.&lt;br /&gt;
– It is not a single API, but a collection of various APIs, each of which is responsible for providing a particular service.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;An API for displaying maps; an API for search (geocoding); an API for routing and turn-by-turn instructions; and so on: these are the components that make up the final product.&lt;/p&gt;

&lt;p&gt;Google’s advantage is centralization: all the APIs are developed and maintained in-house. Using them, you can expect a certain volume of services at a defined level of availability and quality (SLA), for which you are ready to pay, without having to deploy the entire infrastructure yourself or at the client’s site. Calculating how much using the services will cost you is non-trivial, though, because it can be hard to understand how some services depend on others and to predict your expected spending.&lt;/p&gt;

&lt;p&gt;The OpenStreetMap API, by contrast, is intended for reading, editing and saving data in a shared geospatial database by project participants. In other words, the only thing OpenStreetMap guarantees you is the data: using it, you can create your own map or geocoder, offer your customers navigation services, or build something else entirely. The map, search and routing on the main page of the OpenStreetMap site are a demonstration of what can be done with project data. All of these elements are separate, independent projects; the only thing connecting them to OpenStreetMap is the data that OpenStreetMap provides to everyone.&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Map — tile server (&lt;a href=&quot;https://mapnik.org/&quot;&gt;Mapnik&lt;/a&gt;) and map style (&lt;a href=&quot;https://wiki.openstreetmap.org/wiki/Uk:Standard_layer&quot;&gt;OSM-Carto&lt;/a&gt;) — 2 separate projects, which are in no way dependent on OSMF (OpenStreetMap Foundation)&lt;/li&gt;
  &lt;li&gt;Tile Display Library — &lt;a href=&quot;https://leafletjs.com&quot;&gt;leaflet.js&lt;/a&gt;, an open source JavaScript library for displaying interactive maps&lt;/li&gt;
  &lt;li&gt;Search — geocoder &lt;a href=&quot;https://nominatim.org/&quot;&gt;Nominatim&lt;/a&gt;, also an independent project&lt;/li&gt;
  &lt;li&gt;Routing — &lt;a href=&quot;https://project-osrm.org/&quot;&gt;OSRM&lt;/a&gt;, &lt;a href=&quot;https://valhalla.github.io/valhalla/&quot;&gt;Valhalla&lt;/a&gt;, &lt;a href=&quot;https://www.graphhopper.com/&quot;&gt;GraphHopper&lt;/a&gt;, also third-party projects&lt;/li&gt;
  &lt;li&gt;Data export/mining — &lt;a href=&quot;http://overpass-api.de/&quot;&gt;Overpass API&lt;/a&gt; (&lt;a href=&quot;https://overpass-turbo.eu/&quot;&gt;Overpass-Turbo&lt;/a&gt;), ditto…&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;https://ideditor.com/&quot;&gt;iD&lt;/a&gt; is an OSM data editor (built into the main OSM website), now maintained by OSMF.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;All of these are free, open-source projects, and, given the need, the means and the appropriate skills, you can assemble the solution you require from these building blocks. You even have a choice among several rendering and map-creation projects, several geocoding and search engines, and several routing and navigation engines. In addition, you can use OpenStreetMap data for analysis in your own projects, combining it with your other data, which is something you cannot do with Google Maps.&lt;/p&gt;

&lt;h2 id=&quot;using-openstreetmap&quot;&gt;Using OpenStreetMap&lt;/h2&gt;

&lt;p&gt;OpenStreetMap is used by various companies and organizations, including Meta, Apple, Amazon, TomTom and others.&lt;/p&gt;

&lt;p&gt;Meta uses OpenStreetMap data for its mapping and location-integration features. To provide its users with up-to-date information about places and locations, Meta uses OSM data to show users’ locations and maps and to integrate with other features on its platforms.&lt;/p&gt;

&lt;p&gt;Apple also uses OpenStreetMap data in some of its services. For example, in some countries or regions where data from third-party providers for Apple Maps may be less complete or out of date, or conversely where OpenStreetMap data is of higher quality, they may use it to support navigation and cartography.&lt;/p&gt;

&lt;p&gt;Amazon also uses OpenStreetMap data in its services such as AWS (Amazon Web Services). OSM can be used in a variety of geographic data processing, analysis, and visualization solutions on the AWS platform.&lt;/p&gt;

&lt;p&gt;Among other manufacturers of navigation equipment, TomTom also cooperates with OpenStreetMap. They use OSM data to provide navigation and mapping services in their devices and applications.&lt;/p&gt;

&lt;p&gt;Airbnb, a popular accommodation booking platform, also uses OSM data to display locations and accommodations on its maps. They can use OSM to provide accurate information about the location of residences and their surroundings.&lt;/p&gt;

&lt;p&gt;ESRI ArcGIS Online allows users to integrate OpenStreetMap data into their geographic information systems. Users can import OSM data into their projects and use it for analysis, mapping, and visualization.&lt;/p&gt;

&lt;p&gt;These examples show a wide range of uses of OpenStreetMap by well-known companies and services. The openness and availability of OSM data allow different platforms and organizations to use geographic information to provide quality services and develop their own products.&lt;/p&gt;

&lt;p&gt;It should be noted that the choice between Google services and the OpenStreetMap data and ecosystem depends on your specific requirements: the completeness and currency of data coverage in a particular area, how quickly you need changes to reach the data, whether you can implement data quality-control processes, and whether your experience lies with Google products or with the OpenStreetMap ecosystem and data. The choice is yours.&lt;/p&gt;
      </content>
      <author>
        <name>Andrii Holovin</name>
      </author>

      
        <category term="OpenStreetMap"/>
        <category term="Google Maps"/>
        <category term="OSM"/>
        <category term="API"/>
        <summary type="html">There should be a screenshot from Toy Story where Buzz tries to explain to Woody something about the stars by showing his hand up.</summary>
      
    </entry>
  
    <entry xml:lang="en">
      <title type="html">OSM 2.0 API using git</title>
      <link href="https://blog.andygol.co.ua/en/2023/05/07/osm-2-0-api-using-git/" rel="alternate" type="text/html" title="OSM 2.0 API using git"/>
      <published>2023-05-07T08:29:00+00:00</published>
      <updated>2023-05-07T08:29:00+00:00</updated>
      <id>https://blog.andygol.co.ua/en/2023/05/07/osm-2-0-api-using-git</id>
      <content type="html" xml:base="https://blog.andygol.co.ua/en/2023/05/07/osm-2-0-api-using-git/">
        &lt;p&gt;The OSM 2.0 API (OpenStreetMap) is an application programming interface that provides access to geospatial data and its change history stored with git. The basic data structure in OSM should be based on dividing the globe into tiles, each of which is a separate git repository.&lt;/p&gt;

&lt;p&gt;The storage format for OSM object data is YAML (YAML Ain’t Markup Language). YAML is an easy-to-read and easy-to-write data format that uses indentation to represent data structure. Using YAML in OSM makes it possible to store rich, multi-level attributes as tags on geographic objects.&lt;/p&gt;

&lt;p&gt;Each object will be stored as a separate YAML file, unlike previous versions of the OSM API where data was stored and exchanged as XML files.&lt;/p&gt;
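&lt;p&gt;As a sketch of what such a per-object file could look like (the directory layout, identifier and tag names below are illustrative assumptions, not a fixed schema):&lt;/p&gt;

```shell
# Hypothetical on-disk layout: one YAML file per OSM object
# (the path, file name and tag schema are invented for illustration)
mkdir -p tile-repo/nodes
cat > tile-repo/nodes/240996600.yaml <<'EOF'
id: 240996600
type: node
lat: 50.4501
lon: 30.5234
tags:
  amenity: cafe
  name: Hypothetical Cafe
EOF
cat tile-repo/nodes/240996600.yaml
```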

&lt;p&gt;Each tile in OSM is an independent git repository. Git is a version control system that allows you to track the history of changes in a file system and store different versions of files. Using git for OSM allows you to track all changes made to geospatial data and store their history.&lt;/p&gt;

&lt;h2 id=&quot;description&quot;&gt;Description&lt;/h2&gt;

&lt;p&gt;The OSM API should provide various methods for accessing and modifying data. Some of these should include:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;Obtaining data by coordinates or area.&lt;/li&gt;
  &lt;li&gt;Search for objects by type, attributes or tags.&lt;/li&gt;
  &lt;li&gt;Obtaining the history of changes for a specific object.&lt;/li&gt;
  &lt;li&gt;Addition of new objects or changes to existing ones.&lt;/li&gt;
  &lt;li&gt;Deleting objects.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The OSM API should provide the ability to perform queries based on various protocols to conveniently retrieve and interact with geospatial data.&lt;/p&gt;

&lt;p&gt;Overall, the OSM API, which uses git to store geospatial data and its change history, provides a powerful mechanism for collaborating and managing geospatial data. Key features of the OSM API using git include:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Version control&lt;/strong&gt;:
By using git, the OSM API allows you to store a change history for each object. This means that you can track who made changes to a specific object, when, and what changes, as well as restore previous versions of the data.&lt;/p&gt;
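&lt;p&gt;This idea can be sketched with plain git in a throwaway tile repository (the repository and file names here are invented for illustration):&lt;/p&gt;

```shell
# Sketch: tracking who changed an object and restoring a previous version
# in a throwaway tile repository (names and layout are illustrative)
git init -q demo-tile
git -C demo-tile config user.name mapper
git -C demo-tile config user.email mapper@example.org
printf 'tags:\n  name: Old Name\n' > demo-tile/way-42.yaml
git -C demo-tile add way-42.yaml
git -C demo-tile commit -q -m "Create way 42"
printf 'tags:\n  name: New Name\n' > demo-tile/way-42.yaml
git -C demo-tile commit -q -am "Rename way 42"
git -C demo-tile log --format='%an: %s' -- way-42.yaml  # who made which change
git -C demo-tile checkout -q HEAD~1 -- way-42.yaml      # restore the previous version
cat demo-tile/way-42.yaml
```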

&lt;p&gt;&lt;strong&gt;Distributed data structure:&lt;/strong&gt;
The globe is broken down into tiles, which are self-contained git repositories. This allows you to effectively manage and distribute the load on different servers. Each tile contains only the objects within its boundaries, which makes it easier to access specific data.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Data format flexibility:&lt;/strong&gt;
Using the yaml format to store object data allows you to store various attributes and tags that help describe geographic objects in detail. This allows you to add your own tags and attributes for more flexible data presentation.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Scalability:&lt;/strong&gt;
The distributed data structure and the use of git make it easy to scale the system to handle large volumes of geospatial data. You can add new tiles or servers to expand the infrastructure and improve performance, and break large tiles into smaller ones over time.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Convenient access to data:&lt;/strong&gt;
The OSM API provides a variety of methods for accessing OSM data. You can retrieve data by coordinates or regions, search for objects by type, attributes, or OSM tags, and retrieve change history for a specific object. This provides flexibility and the ability to select only the data you need.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Collaboration:&lt;/strong&gt;
Git allows multiple users to work with the data simultaneously and make changes. Thanks to version control and the ability to merge branches, you can easily work together on projects, making and merging changes.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Integration with other tools:&lt;/strong&gt;
Because OSM uses git, you can use a wide variety of tools that support git to work with geospatial data. These can be various version control systems, code editors, data analysis tools, and others.&lt;/p&gt;

&lt;p&gt;In addition to these capabilities, the OSM API using git allows you to:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Data extension:&lt;/strong&gt;
You can add your own data on top of OSM data, extending the current geospatial database. This allows you to create your own datasets, add attributes, and extend the functionality of OSM.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Integration with other services:&lt;/strong&gt;
The OSM API can be easily integrated with other services and tools, such as geospatial services or data analysis systems. This allows you to combine different data sources and use them to create the right solutions.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Advanced editing options:&lt;/strong&gt;
The OSM API allows you to make changes to geospatial data, including adding new features, editing attributes, and deleting features. This makes it possible to actively cooperate with the global OSM database and supplement it with knowledge about specific objects or regions.&lt;/p&gt;

&lt;p&gt;These are just a few additional features of the OSM API using git. Overall, the OSM API should be a powerful tool for working with geospatial data, providing version control, data format flexibility, and scalability.&lt;/p&gt;

&lt;h2 id=&quot;yaml-is-for-saving-data&quot;&gt;YAML is for saving data&lt;/h2&gt;

&lt;p&gt;OSM API version 2.0 uses the yaml format. Each object is stored as a separate YAML file, and this differs from previous versions of the OSM API, where data was stored as XML files. Using the YAML format to store individual object files has several advantages:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Ease of reading and editing:&lt;/strong&gt;
The YAML format has a clear, easy-to-read syntax that makes it easy to view or edit data by hand. Using indentation instead of XML syntax elements makes the data easier to work with and edit.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Flexibility in data expressiveness:&lt;/strong&gt;
YAML allows you to store additional properties and information about objects that are necessary for a particular application. It can include simple values, lists, associative arrays, and nested data structures.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Data search and filtering:&lt;/strong&gt;
Being individual files, objects can be easily found and filtered by various parameters. You can use YAML libraries or tools to search and manipulate data as needed.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Change history support:&lt;/strong&gt;
Each object is presented in the form of its own file, which allows you to save the history of changes and track the development of data. Git, as a version control system, can be used to manage changes and branches, which facilitates effective collaboration and data version control.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Data expansion and validation:&lt;/strong&gt;
You can use YAML schemas to validate and control the correctness of data. This ensures that the data conforms to the given rules and structure. Also, you can extend YAML schemas to define additional properties and constraints for stored objects.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Integration with other tools:&lt;/strong&gt;
YAML is a popular data exchange format supported by many tools and platforms. You can use various tools to import and export data, and to process and analyze geospatial data in YAML format.&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;/images/2023/05/2023-05-07-osm-2-0-api-using-git.png&quot; alt=&quot;null island yaml&quot; /&gt;&lt;/p&gt;

&lt;h2 id=&quot;git-repository-as-a-data-store&quot;&gt;Git repository as a data store&lt;/h2&gt;

&lt;p&gt;In the OSM API using git, the shape (geometry) of each repository’s tiles can be arbitrary. Tiles can be of different shapes and sizes, depending on your geospatial data needs.&lt;/p&gt;

&lt;p&gt;A description of tile boundaries, or other metadata that pertains to each tile, can be stored in the tile repository itself. This allows important information to be stored alongside geospatial data, helping to ensure consistency and availability of that data.&lt;/p&gt;

&lt;p&gt;In addition, there is a master repository that includes tile repositories in the form of git submodules. The master repository serves as the starting point for managing and coordinating tile repositories. It may contain additional information about the tiles, such as indexes or links to each tile.&lt;/p&gt;
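&lt;p&gt;A minimal sketch of this layout, with hypothetical tile names (newer git versions block local-path submodules unless &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;protocol.file.allow&lt;/code&gt; is set, hence the extra option):&lt;/p&gt;

```shell
# Sketch: a master repository that registers tile repositories as submodules
git init -q tile-0-0
git -C tile-0-0 -c user.name=m -c user.email=m@example.org \
  commit -q --allow-empty -m "init tile"
git init -q master-repo
git -C master-repo config user.name m
git -C master-repo config user.email m@example.org
# allow file:// submodule URLs for this local demo
git -C master-repo -c protocol.file.allow=always \
  submodule add -q ../tile-0-0 tiles/tile-0-0
git -C master-repo commit -q -m "Register tile-0-0"
git -C master-repo submodule status
```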

&lt;p&gt;This architecture allows geospatial data to be organized and managed by storing the description of tile boundaries in the tile repository itself and using a master repository to coordinate and access these tile repositories.&lt;/p&gt;

&lt;p&gt;This approach provides flexibility and scalability when working with geospatial data, allowing easy management of individual tiles, as well as maintaining change history and version control using git.&lt;/p&gt;

&lt;p&gt;Data segmentation into tile repositories allows you to implement:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Support for distributed cooperation:&lt;/strong&gt;
By using git, the OSM API enables distributed collaboration. Different users can work on different tiles, make changes and interact with the OSM database using the same principles as for working with code, and later merge their changes using git.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Synchronize and merge changes:&lt;/strong&gt;
By using git, the OSM API makes it easy to synchronize and merge changes made by different users in their own copies of the repositories. This avoids conflicts and ensures data integrity when merging changes. In addition, users can create their own forks to layer their (mostly non-public) data on top of OSM data. Later, when a decision is made to publish that closed data, it can be done with git push. Changes can flow both upstream, from local copies to shared repositories, and downstream, from shared to local repositories, so users can keep their own repositories up to date while receiving updates from the community, with less effort spent on synchronizing data.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;History of changes:&lt;/strong&gt;
Thanks to the use of git, the OSM API stores the history of object changes in each tile, which allows you to follow the development and changes of geospatial data. This is a useful feature for analyzing data, tracking changes, and restoring previous versions.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Version support:&lt;/strong&gt;
Git allows you to store each version of an object (file) in a separate commit, which makes it possible to easily review, restore, and track data changes over time. You can view the change history and go back to previous versions if needed.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Easy replication and distribution:&lt;/strong&gt;
Because each object is stored in its own file, they can be easily replicated and distributed across servers or tiles. This allows for improved scalability and data access. Git provides the ability to synchronize and replicate data between different servers or tiles. You can use git features, such as branching and merging, to manage changes and updates to your data.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Conflict management:&lt;/strong&gt;
In the case of simultaneous changes to the same object, git provides a means to resolve conflicting changes and merge different versions. This allows multiple users to work with the same data and handle conflicts efficiently.&lt;/p&gt;
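&lt;p&gt;A minimal sketch of such a conflict and one possible resolution policy (keeping the other mapper’s edit), with invented file and branch names:&lt;/p&gt;

```shell
# Sketch: two mappers edit the same object file; git flags the conflict
git init -q -b main park-demo
git -C park-demo config user.name mapper
git -C park-demo config user.email mapper@example.org
printf 'name: Park\n' > park-demo/park.yaml
git -C park-demo add park.yaml
git -C park-demo commit -q -m "Base version"
git -C park-demo switch -qc mapper-b
printf 'name: City Park\n' > park-demo/park.yaml
git -C park-demo commit -q -am "Mapper B renames"
git -C park-demo switch -q main
printf 'name: Central Park\n' > park-demo/park.yaml
git -C park-demo commit -q -am "Mapper A renames"
git -C park-demo merge mapper-b || true        # conflict: both edited the same line
git -C park-demo checkout --theirs park.yaml   # resolution policy: keep mapper B's edit
git -C park-demo add park.yaml
git -C park-demo commit -q -m "Merge mapper B's rename"
cat park-demo/park.yaml
```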

&lt;p&gt;&lt;strong&gt;Data recovery:&lt;/strong&gt;
Thanks to git version control, the OSM API allows you to restore data to previous versions or restore deleted objects. This ensures data safety and security.&lt;/p&gt;

&lt;h2 id=&quot;osm-changeset--git-commit&quot;&gt;OSM changeset – git commit&lt;/h2&gt;

&lt;p&gt;So, in the context of saving OSM data in git, a changeset is represented as a commit in git. A commit is a fixed data structure that captures the changes made at a specific point in time.&lt;/p&gt;

&lt;p&gt;Each commit in git contains information about the changes made to the repository. In the case of OSM, a commit can contain changes related to the addition, deletion, or modification of geospatial objects.&lt;/p&gt;

&lt;p&gt;Git commits provide a history of changes and allow you to track how the data develops over time. Each commit has a unique identifier (hash), which identifies it and can be used to restore a specific version of the data.&lt;/p&gt;

&lt;p&gt;One commit can contain changes for one or more geospatial objects. This allows you to record changes in individual objects and manage the history of changes independently of other objects.&lt;/p&gt;

&lt;p&gt;Commits can be grouped into branches and merged with each other, which allows you to manage parallel branches and merge their changes into one common version of the data.&lt;/p&gt;

&lt;p&gt;Hence, commits in git are used to save and manage changes to OSM geospatial data in the git version of data storage.&lt;/p&gt;
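&lt;p&gt;One way this mapping could look in practice; the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;Changeset-*&lt;/code&gt; trailer names below are invented purely for illustration:&lt;/p&gt;

```shell
# Sketch: an OSM-style changeset expressed as one git commit; the trailer
# names (Changeset-Comment, Changeset-Source) are hypothetical
git init -q cs-demo
git -C cs-demo config user.name mapper
git -C cs-demo config user.email mapper@example.org
printf 'tags:\n  highway: residential\n' > cs-demo/way-7.yaml
printf 'tags:\n  amenity: bench\n' > cs-demo/node-9.yaml
git -C cs-demo add .
git -C cs-demo commit -q -m "Add street and bench" \
  -m "Changeset-Comment: field survey" \
  -m "Changeset-Source: GPS"
git -C cs-demo log -1 --format=%B
```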

&lt;h2 id=&quot;advantages-of-using-git&quot;&gt;Advantages of using git&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;git commands:&lt;/strong&gt;
You can use standard git commands to manage tile repositories, such as &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;git clone&lt;/code&gt;, &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;git pull&lt;/code&gt;, &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;git push&lt;/code&gt; and others. It allows you to work with the geospatial data of the OSM API using the familiar and powerful git interface.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Advanced git features:&lt;/strong&gt;
The OSM API can use additional features and capabilities of git, such as branches, git-tags, and commits. It allows you to organize and manage different versions, create milestones and work with different branches.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Access control:&lt;/strong&gt;
The OSM API can provide the ability to restrict access to individual tiles or data using git’s access control features. This allows you to control who has the right to read, write or do other things with geospatial data.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Backup and Restore:&lt;/strong&gt;
Thanks to git, the OSM API allows you to back up your data and restore it when needed. This provides an additional layer of protection and security for your geospatial data without having to develop your own solutions for it.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Distribution of changes:&lt;/strong&gt;
You can easily propagate changes made to tile repositories to other OSM API instances or installations using familiar git methods. There is no need to spend server resources producing data dumps and replication diffs: with git, consumers simply fetch updated data as needed with &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;git pull&lt;/code&gt;. This enables collaboration with different communities or distributed systems, sharing and synchronizing geospatial data.&lt;/p&gt;
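&lt;p&gt;A minimal sketch of this pull-based distribution, with invented repository names:&lt;/p&gt;

```shell
# Sketch: a downstream mirror stays current with plain clone/pull, no dump files
git init -q -b main upstream-tile
git -C upstream-tile config user.name mapper
git -C upstream-tile config user.email mapper@example.org
printf 'name: v1\n' > upstream-tile/obj.yaml
git -C upstream-tile add obj.yaml
git -C upstream-tile commit -q -m "v1"
git clone -q upstream-tile mirror-tile   # initial copy
printf 'name: v2\n' > upstream-tile/obj.yaml
git -C upstream-tile commit -q -am "v2"
git -C mirror-tile pull -q               # fetch only the new change
cat mirror-tile/obj.yaml
```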

&lt;p&gt;&lt;strong&gt;Integration with other tools:&lt;/strong&gt;
The OSM API using git can be easily integrated with other development tools, such as project management systems, automation tools, cloud storage services, and others. This makes it an important component when developing geospatial applications and working with data.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Synchronization with other data sources:&lt;/strong&gt;
Using the OSM and git APIs, you can synchronize geospatial data with other data sources. This can be useful if you have additional data sources or want to combine data from different sources to create a more complete and comprehensive database.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Extensions and customizations:&lt;/strong&gt;
The OSM API with git allows you to extend and customize its functionality according to your own needs. You can add additional modules, functions, and extensions to work with geodata and interact with the API.&lt;/p&gt;

&lt;p&gt;Using the OSM API with git provides powerful capabilities for managing geospatial data, maintaining its change history, and collaborating with the community.&lt;/p&gt;

&lt;h3 id=&quot;summary-of-osm-api-capabilities-using-git&quot;&gt;Summary of OSM API capabilities using git&lt;/h3&gt;

&lt;p&gt;The OSM API, which uses git to store geospatial data and change history, has the following features:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;Each tile is an independent git repository containing a separate set of geospatial data.&lt;/li&gt;
  &lt;li&gt;Tiles can have a different shape (geometry) and the description of their boundaries is stored in the tile repository itself and the master repository.&lt;/li&gt;
  &lt;li&gt;Data about individual objects is stored in the YAML format, which allows you to store additional properties and information about them, imitating all the flexibility of the OSM tagging system.&lt;/li&gt;
  &lt;li&gt;The API supports distributed collaboration, allowing different users to work on different tiles and merge changes using git.&lt;/li&gt;
  &lt;li&gt;Changes are saved in the change history of git repositories, which allows you to follow the development of geospatial data.&lt;/li&gt;
  &lt;li&gt;Standard git commands like &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;clone&lt;/code&gt;, &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;pull&lt;/code&gt; and &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;push&lt;/code&gt; can be used to manage tile repositories.&lt;/li&gt;
  &lt;li&gt;The API provides advanced git features such as branches, git tags, and commits for version control and development.&lt;/li&gt;
  &lt;li&gt;There is an option to restrict access to tiles and data using git access control.&lt;/li&gt;
  &lt;li&gt;The API allows you to restore data to previous versions and perform data backup and recovery.&lt;/li&gt;
  &lt;li&gt;Changes can be propagated to other instances of the OSM API and collaborated with the OpenStreetMap community and other developers.&lt;/li&gt;
&lt;/ol&gt;

&lt;h2 id=&quot;distribution-of-large-objects-between-tiles&quot;&gt;Distribution of large objects between tiles&lt;/h2&gt;

&lt;p&gt;The distribution of large objects between tiles is an important aspect when organizing geospatial data in the OSM API using git. This allows you to effectively manage large volumes of data and ensure optimal performance and speed of access to this data.&lt;/p&gt;

&lt;p&gt;There are several approaches to distributing large objects between tiles in the OSM API. Here are some of them:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Geographic distribution:&lt;/strong&gt;
Objects can be distributed between tiles using their geographic location. For example, large regional objects can be divided into separate tiles covering the corresponding territory. This allows data to be stored closer to its physical location and reduces the load on individual tiles.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Distribution by categories:&lt;/strong&gt;
Objects can be distributed between tiles depending on their categories or types. For example, road or hydrographic features can be stored in separate tiles, making it easier to access specific categories of data.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Dependence on size:&lt;/strong&gt;
Objects can be divided into tiles depending on their size or complexity. For example, large or complex objects can be stored in separate tiles for optimal management and quick access to them. It can be contours of oceans and continents, borders of countries and so on.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Dynamic allocation:&lt;/strong&gt;
The OSM API can automatically allocate large objects between tiles based on load and demand.&lt;/p&gt;

&lt;p&gt;An alternative approach may be to distribute large objects based on virtual tile boundaries. Instead of physically distributing objects between separate tile repositories, you can use virtual borders that join objects that are logically related to each other.&lt;/p&gt;

&lt;p&gt;In this approach, large objects can be divided into multiple tiles, and these tiles can share a repository or have references to each other. This allows objects that are physically located in different tiles to be stored together in a virtual group.&lt;/p&gt;

&lt;p&gt;Accordingly, you can have each tile as a separate git repository, but they can reference each other to form a virtual group of objects. This provides convenient access and management of large objects located in different tiles, while maintaining the flexibility and speed of git.&lt;/p&gt;

&lt;p&gt;The choice of approach to the distribution of large objects between tiles depends on the specific needs and requirements for performance, availability, and manageability of the data.&lt;/p&gt;

&lt;h2 id=&quot;notes&quot;&gt;Notes&lt;/h2&gt;

&lt;p&gt;It is important to note that this description is general and specific details and implementation may depend on your specific OSM 2.0 API implementation and your needs.&lt;/p&gt;

&lt;p&gt;When working with the OSM API using git, make sure you follow OSM’s rules and policies for making changes to the global database. Keeping a change history and using git correctly will help manage these changes and ensure data security.&lt;/p&gt;

&lt;p&gt;If you have specific questions about using the OSM API with git or need more details, please let me know and I’ll try to give you more details.&lt;/p&gt;
      </content>
      <author>
        <name>Andrii Holovin</name>
      </author>

      
        <category term="OSM"/>
        <category term="API"/>
        <category term="git"/>
        <category term="yaml"/>
        <category term="yml"/>
        <summary type="html">The OSM 2.0 API (OpenStreetMap) is an application programming interface that provides access to geospatial data and its change history stored with git. The basic data structure in OSM should be based on dividing the globe into tiles, each of which is a separate git repository.</summary>
      
    </entry>
  
</feed>
