Installing to a local machine
Installing Charmed Kubernetes on a single machine is possible for the purposes of testing and development.
However, be aware that the full deployment of Charmed Kubernetes has system requirements which may exceed a standard laptop or desktop machine. It is only recommended for a machine with at least 32GB of RAM and 250GB of SSD storage.
Note: If you don't meet these requirements, or want a lightweight way to develop on pure Kubernetes, we recommend MicroK8s (https://microk8s.io/).
In order to run locally, you will need a local cloud. This can be achieved by using lightweight containers managed by LXD. LXD version 3.0 or better is required.
1. Set up LXD
If LXD has not previously been installed
LXD 3.0 or above should be installed from a snap and configured for Charmed Kubernetes.
Install LXD
sudo snap install lxd
Run the LXD init script
/snap/bin/lxd init
The init script itself may vary depending on the version of LXD. The important configuration options for the installer are:
- Networking: Do NOT enable ipv6 networking on the bridge interface
- Storage Pool: Use the ‘dir’ storage type
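If you prefer a non-interactive setup, LXD can also read a preseed configuration from standard input. The snippet below is a minimal sketch (the bridge name lxdbr0 and pool name default are the usual defaults, but adjust them to your environment) that creates a dir storage pool and a bridge with IPv6 disabled:
cat <<EOF | lxd init --preseed
# Minimal preseed: 'dir' storage pool, bridge with IPv6 disabled
# (may require sudo if your user is not yet in the lxd group)
networks:
- name: lxdbr0
  type: bridge
  config:
    ipv4.address: auto
    ipv6.address: none
storage_pools:
- name: default
  driver: dir
profiles:
- name: default
  devices:
    eth0:
      name: eth0
      nictype: bridged
      parent: lxdbr0
      type: nic
    root:
      path: /
      pool: default
      type: disk
EOF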
You can now move on to the next step.
If LXD is already installed
If you installed LXD from a snap, you can skip this step (but if necessary, you may need to alter the default profile). If your system had LXD pre-installed, or you have installed it from the archive (i.e. with apt install), you will need to migrate to the snap version.
If you aren’t sure whether LXD is installed, you can check installed snaps with:
snap list | grep lxd
and installed deb packages with:
dpkg -s lxd | grep Status
If you do have the deb version of LXD installed, you should migrate to the snap version once the snap has been installed. The snap includes a script to do this for you:
sudo snap install lxd
sudo /snap/bin/lxd.migrate
This will move all container-specific data to the snap version and clean up the unused Debian packages, which may take a few minutes.
If LXD was installed, but never used, there will be no data in the default profile, so you should now initialise LXD:
sudo lxd init
Currently, Charmed Kubernetes only supports dir as a storage option and does not support ipv6, which should be set to none from the init script.
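To double-check the result of the init step, you can inspect the storage pool and the bridge configuration (an optional sanity check; lxdbr0 is the default bridge name and may differ on your system):
lxc storage list
lxc network show lxdbr0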
Additional profiles will be added automatically to LXD to support the
requirements of Charmed Kubernetes.
2. Install Juju
Juju should be installed from a snap:
sudo snap install juju --classic
Juju comes preconfigured to work with LXD. A cloud created by using LXD containers on the local machine is known as localhost to Juju. To begin, you need to create a Juju controller for this cloud:
juju bootstrap localhost
Juju creates a default model, but it is useful to create a new model for each project:
juju add-model k8s
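Before moving on, you can optionally confirm that the controller and model were created by listing them:
juju controllers
juju models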
3. Deploy Charmed Kubernetes
All that remains is to deploy Charmed Kubernetes. A simple install can be achieved with one command:
juju deploy charmed-kubernetes
This will install the latest stable version of Charmed Kubernetes with the default components and configuration. If you wish to customise this install (which may be helpful if you are close to the system requirements), please see the main install page.
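Deployment can take a while as machines are provisioned and the charms are related to each other. One convenient way to follow progress (assuming the watch utility is installed on your machine) is:
watch -c juju status --color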
Next Steps
Now that you have a cluster up and running, check out the Operations guide for how to use it!
Troubleshooting
I get an error message when running lxc or lxd init
The most common cause of this message:
Error: Get http://unix.socket/1.0: dial unix /var/snap/lxd/common/lxd/unix.socket: connect: permission denied
…is that either you have not run lxd init, or you are logged in as a user who is not part of the lxd group (the user installing the snap is automatically added).
To add the current user to the relevant group:
sudo usermod -a -G lxd $USER
You may need to start a new shell (or logout and login) for this to take effect:
newgrp lxd
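To verify that the group membership has taken effect in your current shell (a quick, optional check), list your groups:
groups | grep lxd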
Confirm CNIs do not need specific kernel parameters unsupported by the lxd-profile
If the CNI pods fail to start, see the notes on the specific CNI page. CNIs like Cilium and Calico need access to /sys/fs/bpf, but that mountpoint is not supported by Juju's validation check for the charm-specific lxd-profile.yaml. See the CNI Overview for more details.
Services fail to start with errors related to missing files in the /proc filesystem
For example, systemctl status snap.kube-proxy.daemon may report the following:
Error: open /proc/sys/net/netfilter/nf_conntrack_max: no such file or directory
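You can confirm that the parameter is present on the host itself (a quick check; it only appears once the nf_conntrack kernel module is loaded), which indicates the problem is with the container profile rather than the host:
sysctl net.netfilter.nf_conntrack_max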
This is most commonly caused by the lxd-profile.yaml not being applied. Verify the profile in use by the kubernetes-worker charm:
lxc profile list
lxc profile show juju-[model]-kubernetes-worker-[revision]
Identify any missing fields from the above lxd-profile.yaml file and add them as needed with:
lxc profile edit juju-[model]-kubernetes-worker-[revision]
You may need to remove and re-add the affected unit for the changes to take effect:
juju remove-unit kubernetes-worker/[n]
juju add-unit kubernetes-worker
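Afterwards, you can watch the replacement unit settle (an optional check):
juju status kubernetes-worker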
Kubelet fails to start with errors related to inotify_add_watch
For example, systemctl status snap.kubelet.daemon.service may report the following error:
kubelet.go:1414] "Failed to start cAdvisor" err="inotify_add_watch /sys/fs/cgroup/cpu,cpuacct: no space left on device"
This problem is usually related to the kernel parameters fs.inotify.max_user_instances and fs.inotify.max_user_watches.
First, increase their values on the machine that is hosting the Charmed Kubernetes installation:
sysctl -w fs.inotify.max_user_instances=8192
sysctl -w fs.inotify.max_user_watches=1048576
Then the new values should be applied to the worker units:
juju config kubernetes-worker sysctl="{ fs.inotify.max_user_instances=8192 }"
juju config kubernetes-worker sysctl="{ fs.inotify.max_user_watches=1048576 }"
Calico is blocked with warning about ignore-loose-rpf
Calico may be blocked with the status: ignore-loose-rpf config is in conflict with rp_filter value.
If the kernel net.ipv4.conf.all.rp_filter value is set to 2, Calico will complain, because it expects the kernel to have strict reverse path forwarding set (i.e. a value of 0 or 1) for security.
In LXD containers, it’s not possible to manipulate the value; it’s dependent on the host.
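You can confirm the current value on the host machine (a quick check) with:
sysctl net.ipv4.conf.all.rp_filter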
In this situation, we can set the charm config ignore-loose-rpf=true.
juju config calico ignore-loose-rpf=true