I wanted to take a look at ksonnet and kustomize but felt I should write about what I know best first, and that’s Helm. All three tools are trying to tackle the same basic problem: enabling the re-use of Kubernetes configuration where typically we want some level of customisation each time we use the configuration, whether that’s to reflect the latest deployment or the environment we’re deploying to.
The starting point for Helm is a client binary called, unsurprisingly, `helm`, which is available from brew as `kubernetes-helm`. The current version of Helm also has a server-side component called Tiller, deployed by executing `helm init --wait` (the `--wait` flag tells the command to wait until the Tiller pod has started before returning). Note that Helm is pretty picky about having matching versions of client and server.
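As a quick sketch of getting up and running (the Homebrew step assumes macOS; use your platform's equivalent otherwise):

```sh
# Install the Helm v2 client
brew install kubernetes-helm

# Deploy Tiller into the current cluster and wait for its pod to be ready
helm init --wait

# Check that the client and server versions match
helm version
```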
In keeping with the Kubernetes nautical theme, Helm's unit of packaging is the chart. A new chart called `test` can be constructed easily with the command `helm create test`. This results in a directory called `test` with the following contents:
```
test
├── Chart.yaml
├── charts
├── templates
│   ├── NOTES.txt
│   ├── _helpers.tpl
│   ├── deployment.yaml
│   ├── ingress.yaml
│   └── service.yaml
└── values.yaml
```
`Chart.yaml` contains some basic metadata about the chart such as name, description and version. Note that Helm draws a distinction between the version of the chart and the version of the application that the chart deploys. We'll come back to the empty `charts` directory later. The `templates` directory contains the YAML files that we'd expect to find for a basic Kubernetes application, and these can be extended and/or replaced as needed. As the directory name suggests, though, the files are actually templates, and this is the key to reuse. If we look in `deployment.yaml` we'll see the mustache syntax used to support substitution:
```yaml
spec:
  containers:
    - name: {{ .Chart.Name }}
      image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
      imagePullPolicy: {{ .Values.image.pullPolicy }}
```
We can see that it's going to use the name from `Chart.yaml` for the container name. The default contents for the `.Values` fields are taken from `values.yaml`, which typically also contains comments describing the intent of each value. The templates use Go template syntax and support the Sprig functions along with a few Helm-specific extensions. The syntax is rich and supports pretty much any manipulation you're likely to want to perform on configuration. The `_helpers.tpl` file defines some helper functions that are used throughout the chart. Finally, `NOTES.txt` contains text that is output when the chart is installed, typically providing usage instructions for the installed application. This file also supports templating.
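To make the substitution concrete, here's a sketch of the `image` section from the generated `values.yaml` (the exact defaults may vary between Helm versions):

```yaml
# Default values for the test chart,
# consumed by the templates via {{ .Values.image.* }}
image:
  # the image to deploy
  repository: nginx
  # the tag to use, overridable at install time
  tag: stable
  # when the kubelet should re-pull the image
  pullPolicy: IfNotPresent
```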
The default generated chart deploys a pod with nginx in it. If we look in `values.yaml` we can see that, by default, it's going to use the `stable` image tag. We can override this at install time. For example:
```sh
helm install --name test-release --wait --set image.tag=latest test
```
If we want to override lots of values then we can specify them via a YAML file instead. Indeed, we can specify multiple files, so there might, for example, be one for `production` and another for `us-east-1`. The `--name` here is optional but, at least for scripting, it's easier not to use one of the generated names. Note that, although the release is deployed to a namespace (and Helm isn't capable of tracking resources that explicitly target some other namespace), the name is scoped to the Tiller instance (i.e. the cluster, if you only have one Tiller install).
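For example, layering value files might look like this (`production.yaml` and `us-east-1.yaml` are just hypothetical file names; later files take precedence over earlier ones, and `--set` overrides them all):

```sh
helm install --name test-release --wait \
  -f production.yaml \
  -f us-east-1.yaml \
  --set image.tag=latest \
  test
```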
There are other commands for operating on releases: `delete`, `upgrade`, `rollback`, `history`, `list`, `status` and `get`, all of which do pretty much what you'd expect of them. As the `upgrade`, `rollback` and `history` commands suggest, Helm tracks revisions of a release. Tip: if you're in some CD pipeline and you don't know whether you're upgrading or installing for the first time, use `helm upgrade --install` and it will do the right thing. Something else to watch out for is the `--wait` option on an upgrade: it waits until you have at least `replicas - maxUnavailable` pods available. Given that these are both typically one by default, don't be surprised when it doesn't appear to be waiting!
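A sketch of a typical sequence, reusing the `test-release` name from above:

```sh
# Install on the first run, upgrade on subsequent runs
helm upgrade --install --wait test-release test --set image.tag=latest

# Inspect the release and its revisions
helm status test-release
helm history test-release

# Roll back to revision 1 if the upgrade misbehaves
helm rollback test-release 1
```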
We’ve just installed a chart off the local filesystem but, as with any decent package manager, Helm has the concept of a repository. By default, the helm CLI is configured with the stable and incubator repositories. You can search these with helm search
or head over to Kubeapps for a shiny catalog. These show the real power of Helm when, with a simple helm install stable/wordpress
, you can re-use someone else’s hard work in defining the best practise for deploying WordPress on Kubernetes. You can add other repositories (e.g. that for Jenkins X) and you can create your own, either via a simple file server or, a read-write repository using ChartMuseum and the Monocular UI.
The packages themselves are simply `.tgz` files, although the `helm package` command also supports things like signing the package. I said I'd come back to the `charts` directory, and the WordPress chart is a good example of how this can be used. The WordPress chart actually also deploys the MariaDB chart to provide persistent storage. This is achieved by placing a `requirements.yaml` in the root of the chart that specifies the dependency. Dependencies can be fetched with `helm dependency update` or resolved at packaging time with the `--dependency-update` flag.
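A sketch of what such a dependency declaration looks like (the version range and repository URL are illustrative rather than copied from the real WordPress chart):

```yaml
# requirements.yaml in the root of the parent chart
dependencies:
  - name: mariadb
    version: 5.x.x
    repository: https://kubernetes-charts.storage.googleapis.com/
    # optionally gate the sub-chart on a value
    condition: mariadb.enabled
```

```sh
# Fetch the declared dependencies into the charts/ directory
helm dependency update test

# Or resolve them while packaging
helm package --dependency-update test
```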
Jenkins X makes use of this sub-chart capability in the way it deploys environments. The repo for each environment contains a Helm chart which specifies as dependencies all of the application charts to deploy into the environment. One downside of this approach is that you then only see a Helm release for the entire environment. Another issue with sub-charts relates to the way their values are overridden. The values for a sub-chart can be overridden by prefixing them with the name or alias given in `requirements.yaml`, but there is no good way to get the same value into multiple sub-charts. If you have control over the sub-charts you can write them to retrieve values from a `global` scope, but that doesn't help if you're trying to re-use someone else's efforts.
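A sketch of both mechanisms from the parent chart's `values.yaml` (the sub-chart and value names are made up for illustration):

```yaml
# Values under a key matching the sub-chart name (or its alias
# from requirements.yaml) are passed only to that sub-chart
mariadb:
  db:
    name: wordpress

# Values under 'global' are visible to the parent chart and to
# every sub-chart, but only if those charts are written to use them
global:
  environment: production
```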
Helm provides lots of other goodness. You can annotate resources with hooks so that they are run pre- or post-install, upgrade, rollback or delete. There is even a `crd-install` annotation to ensure that your CRDs are created before other resources attempt to use them. You should also know about `helm lint` and `helm test`. The latter executes resources in the chart with the `test-success` or `test-failure` hook annotations. You can use these to provide a detailed check of whether a release was successfully deployed.
An overview of Helm wouldn't be complete without a reference to Helm v3. At the beginning I indicated that the current version deploys a server-side component. That is particularly problematic when it comes to security as, even if you enable mutual TLS, any user that can connect can perform any action supported by the service account under which Tiller is running, losing you all of Kubernetes' RBAC benefits. You can mitigate the problem by installing a Tiller per namespace but you're still left managing certificates. IBM Cloud Private implemented their own Tiller to do identity assertion in order to overcome this problem. Jenkins X now supports using `helm template` to render charts client-side before deploying with `kubectl`. This means that you don't get any of the Helm release management, but then that is handled by Jenkins X anyway. Helm 3 promises to do away with Tiller but still share release information server-side via CRDs. Sadly, as James alludes to in his blog post, there's not a lot of public progress being made on Helm 3 at the moment.
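That client-side rendering approach looks something like this (Helm v2 flags; the rendered output goes straight to `kubectl`, so no release is recorded by Tiller):

```sh
helm template --name test-release --set image.tag=latest test | kubectl apply -f -
```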