Archive for the 'Kubernetes' Category

Knative Intro @ Devoxx UK
Thursday, May 30th, 2019

I presented an introduction to Knative at Devoxx UK, the recording for which can be found below. I'm afraid I deviated somewhat from the abstract given the changes to the project in the five months since I submitted it. With only half an hour, I probably shouldn't have tried to cover Tekton as well but I wanted to have an excuse to at least touch on Jenkins X, however briefly! The demo gods largely favoured me.

Debugging with Telepresence
Monday, February 11th, 2019

I've spent the last few days trying to debug an issue on Kubernetes with an external plugin that I've been writing in Go for Prow. Prow's hook component forwards on a GitHub webhook and the plugin mounts in various pieces of configuration from the cluster (the Prow config, a GitHub OAuth token and the webhook HMAC secret). As a consequence, running the plugin standalone in my dev environment is tricky, but it is just the sort of scenario that Telepresence is designed for.
The following command is all that is needed to perform a whole host of magic:
```shell
telepresence \
  --swap-deployment my-plugin-deployment \
  --expose 8888 \
  --mount=/tmp/tp \
  --run ./my-plugin \
    --config-path /tmp/tp/etc/config/config.yaml \
    --hmac-secret-file /tmp/tp/etc/webhook/hmac \
    --github-token-path /tmp/tp/etc/github/oauth
```
- It locates the my-plugin-deployment deployment already running in the cluster and scales the number of replicas down to zero.
- It executes the my-plugin binary locally and creates a replacement deployment in the cluster that routes traffic to the local process on the exposed port.
- It finds the volumes defined in the deployment and syncs their contents to /tmp/tp using the mount paths also specified in the deployment.
- Although not needed in this scenario, it also sets up the normal Kubernetes environment variables around the process and routes network traffic back to the cluster.
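For context, here is a sketch of the sort of deployment spec that Telepresence is swapping out. The names, image and secret names are illustrative, not those of the actual plugin; the point is the volume mounts, which Telepresence mirrors locally, so /etc/config in the pod appears as /tmp/tp/etc/config with the --mount flag used above.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-plugin-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-plugin
  template:
    metadata:
      labels:
        app: my-plugin
    spec:
      containers:
        - name: my-plugin
          image: my-plugin:latest    # hypothetical image
          ports:
            - containerPort: 8888
          volumeMounts:
            - name: config
              mountPath: /etc/config
            - name: hmac
              mountPath: /etc/webhook
            - name: oauth
              mountPath: /etc/github
      volumes:
        - name: config
          configMap:
            name: config
        - name: hmac
          secret:
            secretName: hmac-token
        - name: oauth
          secret:
            secretName: oauth-token
```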
Now, it was convenient in this case that the binary already exposed command-line arguments for the configuration files so that I could direct them to the alternative path. Failing that, you could always use Telepresence in its --docker-run mode and then mount the files onto the container at the expected location.
And the issue I was trying to debug? I had used configAgent.Start() to load and refresh the Prow configuration and had added debug statements with logrus (which only outputs at info level and above by default). As a consequence, everything was actually working as it should and my debug statements just weren't outputting anything!
Oracle Code One: Continuous Delivery to Kubernetes with Jenkins and Helm
Wednesday, October 31st, 2018

Last week I was out in San Francisco at Oracle Code One (previously known as JavaOne). I had to wait until Thursday morning to give my session on “Continuous Delivery to Kubernetes with Jenkins and Helm”. This was the same title I presented in almost exactly the same spot back in February at IBM’s Index Conference but there were some significant differences in the content.
The first half was much the same. As you can see from the material on SlideShare and GitHub, it covers deploying Jenkins on Kubernetes via Helm and then setting up a pipeline with the Kubernetes plugin to build and deploy an application, again, using Helm. This time, I’d built a custom Jenkins image with the default set of plugins used by the Helm chart pre-installed which improved start-up times in the demo.
I had previously mounted in the Docker socket to perform the build but removed that and used kaniko instead. This highlighted one annoyance with the current approach used by the Kubernetes plugin: it uses exec on long-running containers to execute a shell script with the commands defined in the pipeline. The default kaniko image is a scratch image containing just the executor binary: there is nothing there to keep it alive, nor a shell to execute the script. In his example, Carlos uses the kaniko:debug image, which adds a busybox shell, but that requires other hoops to be jumped through because the shell is not in the normal location. Instead, I built a kaniko image based on Alpine.
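A minimal sketch of such an image (the tags here are assumptions; pick versions to suit):

```dockerfile
# Take the kaniko executor and drop it on top of an Alpine base so the
# Kubernetes plugin has a shell and a long-lived process to exec into.
FROM gcr.io/kaniko-project/executor:latest AS kaniko

FROM alpine:3.8
COPY --from=kaniko /kaniko /kaniko
ENV PATH="/kaniko:${PATH}"
```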
The biggest difference from earlier in the year was, perhaps unsurprisingly, the inclusion of Jenkins X. I hadn't really left myself enough time to do it justice. Given the normal terrible conference wifi and the GitHub outage earlier in the week, I had recorded a demo showing initial project creation, promotion, and update. I've added a voiceover so you can watch it for yourself below (although you probably want to go full-screen unless you have very good eyesight!).
Introduce poetry to your Kube config with ksonnet
Monday, October 15th, 2018

Returning to the 101 ways to create Kubernetes configuration theme, next up is ksonnet from the folks at Heptio. (I have no doubt that there are 101 ways to create Kubernetes configuration but I’m afraid I don’t really intend to cover all of them on this blog!) ksonnet has a different take yet again from Helm and kustomize. In many ways, it is more powerful than either of them but that power comes at the cost of a fairly steep learning curve.
The name is derived from Jsonnet, a data templating language that came out of Google back in 2014. Jsonnet essentially extends JSON with a scripting syntax that supports the definition of programming constructs such as variables, functions, and objects. The ‘Aha!’ moment for me with ksonnet was in realizing that it could be used as a simple template structure in much the same way as Helm. You start with some Kubernetes configuration in JSON format (and yq is your friend if you need to convert from YAML to JSON first) and from there you can extract parameters. I say ‘it could’ because you’d typically only take this approach if you were actually converting existing configuration but realizing this helped me get beyond some of the slightly strange syntax you see in generated files.
As usual, Homebrew is your starting point: brew install ksonnet/tap/ks. ksonnet has an understanding of the different environments to which an application is deployed and, when you issue ks init myapp, it takes the cluster that your current kube config is pointing at as the default environment (although you can override this with --context).
ksonnet then has the concept of ‘prototypes’ which are templates for generating particular types of application component when supplied with suitable parameters. These are provided by ‘packages’ which, in turn, come from a ‘registry’ stored on GitHub. Stealing from the tutorial, we can generate code for a simple deployment and service with the deployed-service prototype, giving the image name and service type as parameters:
```shell
ks generate deployed-service guestbook-ui \
  --image gcr.io/heptio-images/ks-guestbook-demo:0.1 \
  --type ClusterIP
```
At this point, we can use ks show default to return the YAML that would be generated, or ks apply default to actually apply it to the default environment. I highly recommend doing the tutorial first and not the web-based tour as it shows you that you can get a long way with ksonnet without actually editing, or even looking at, any of the generated files. For example, you can use ks env add to create another environment and then ks param set to override the values of parameters for a particular environment as you might with Helm or kustomize.
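To give a flavour of what ks param set is doing under the covers, the per-environment params file it edits looks something like the following. This is a sketch; the exact boilerplate varies by ksonnet version, and the replicas override is an invented example.

```jsonnet
local params = std.extVar("__ksonnet/params");
local globals = import "globals.libsonnet";
local envParams = params + {
  components +: {
    // written by e.g. `ks param set guestbook-ui replicas 2 --env=default`
    "guestbook-ui" +: {
      replicas: 2,
    },
  },
};

{
  components: {
    [x]: envParams.components[x] + globals
    for x in std.objectFields(envParams.components)
  },
}
```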
Of course, the real power comes when you drop into the code and make use of ksonnet features like parts and modules to enable greater reuse of configuration in your application. At that point though, you really should take the time to learn jsonnet properly!
kail: kubernetes tail
Friday, October 12th, 2018

A short post for today but it relates to a tool that every Kubernetes user should have in their toolbox: kail. Although most users probably know that kubectl logs will, by default, show the logs for all containers in a pod, and that it has --tail and -f options, fewer probably know that it has a -l option to select pods based on label. Kail takes tailing Kubernetes logs to a whole new level.
For Homebrew users, it's available via brew install boz/repo/kail. When executed without any arguments, it tails logs for all containers in the cluster, which is probably not what you want unless your cluster is very quiet! There are, however, flags to let you filter not just on pod, container, and label, but also namespace, deployment, replica set, ingress, service, or node. Flags of the same type are ORed together; different flags are ANDed. And that's pretty much all there is to it, but anyone who finds themselves watching the logs of any moderately complex application will wonder how they lived without it!
Kustomizing Kubernetes Konfiguration
Thursday, October 11th, 2018

Finally, I get to write that blog post on kustomize! kustomize is yet another tool attempting to solve the problem of how to make Kubernetes configuration re-usable. Unlike, say, Helm, kustomize allows configuration to be overridden at consumption time without necessarily having allowed for it when the configuration was originally produced. This is great if you are attempting to re-use someone else’s configuration. On the flip-side, you might prefer to use something like Helm if you actually want to limit the points of variability e.g. to ensure standardization across environments or applications.
You know the drill by now: the Go binary CLI can be obtained via brew install kustomize. There is one main command and that is kustomize build. That expects to be pointed at a directory or URL containing a kustomization.yaml file. Running the command outputs the required Kubernetes resources to standard output, where they can then be piped to kubectl if desired.
The kustomization.yaml can contain the following directives:

- namespace – to add a namespace to all the output resources
- namePrefix – to add a prefix to all the resource names
- commonLabels – to add a set of labels to all resources (and selectors)
- commonAnnotations – to add a set of annotations to all resources
- resources – an explicit list of YAML files to be customized
- configMapGenerator – to construct ConfigMaps on the fly
- secretGenerator – to construct Secrets via arbitrary commands
- patches – YAML files containing partial resource definitions to be overlayed on resources with matching names
- patchesJson6902 – applies a JSON patch that can add or remove values
- crds – lists YAML files defining CRDs (so that, if their names are updated, resources using them are also updated)
- vars – used to define variables that reference resources/files for replacement in places that kustomize doesn’t handle automatically
- imageTags – updates the tag for images matching a given name
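Pulling a few of those directives together, a kustomization.yaml might look something like this (the file, label and image names are invented for illustration):

```yaml
namespace: staging
namePrefix: myapp-
commonLabels:
  app: myapp
resources:
  - deployment.yaml
  - service.yaml
configMapGenerator:
  - name: app-config
    literals:
      - LOG_LEVEL=debug
patches:
  - replica-patch.yaml
imageTags:
  - name: myapp
    newTag: v1.2.3
```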
That’s a pretty comprehensive toolbox for manipulating configuration. The only directive I didn’t mention was bases, with which you can build a hierarchy of customizations. The prototypical example given is of a base configuration with different customizations for each deployment environment. Note that you can have multiple bases, so aws-east-staging might extend both aws-east and staging.
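As a sketch, the aws-east-staging variant's kustomization.yaml could then be as small as (relative directory layout assumed):

```yaml
bases:
  - ../aws-east
  - ../staging
namePrefix: aws-east-staging-
```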
One of the refreshing things about kustomize is that it explicitly calls out a set of features that it doesn’t intend to implement. This introduces the only other command that the CLI supports: kustomize edit. Given that one of the stated restrictions is that kustomize does not provide any mechanism for parameterising individual builds, the intent of this command is to allow you to script modifications to your kustomization.yaml prior to calling build.
It’s worth noting that kustomize can be used in combination with Helm. For example, you could run helm template and then use kustomize to make additional modifications that are not supported by the original chart. You can also use them in the reverse order. The Helmfile docs describe how to use Helmfile’s hooks to drive a script that will use kustomize to construct the required YAML, but then wrap it in a shell chart so that you get the benefit of Helm’s releases.
Helmfile and friends
Monday, October 8th, 2018

Having written a post on Helm, I feel obliged to follow it up with one on Helmfile, a project that addresses some of the issues that I identified with deploying multiple Helm charts. In particular, it provides an alternative approach to the umbrella chart mechanism that Jenkins X uses for deploying the set of charts that represent an environment.
Yet again, we have a Go binary, available via brew install helmfile. At its most basic, we then have a helmfile.yaml that specifies a set of releases with a name, chart, namespace and override values for each. A helmfile sync will then perform an install/upgrade for all the releases defined in the file. One thing I failed to mention in my Helm post was that Helm supports plugins on the client side. One such plugin is the helm-diff plugin which, as you'd probably guess from the name, gives you a diff between the current state of a release and what it would look like after an upgrade or rollback. The plugin is installed with:
```shell
helm plugin install https://github.com/databus23/helm-diff --version master
```
With this in place, we can now use helmfile diff to see the changes that would take place across all of our releases. The helmfile apply command combines this with a sync to conditionally perform an upgrade only if there are differences. There is a set of other helmfile commands that all perform aggregate operations across all the releases: delete, template, lint, status and test.
So far so good but nothing that couldn’t be achieved with a pretty short bash script. Where things get more interesting is that the helmfile.yaml is actually a template in the same way as the templates in a Helm chart. This means we can start to do more interesting things like defining values in one place and then reusing them across multiple releases. Helmfile has the explicit concept of an environment, passed in as a parameter on the CLI. We can use a single YAML file and use templating to have different values apply in each environment or, in the extreme, only deploy charts in some environments.
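As a sketch of the sort of thing that enables (the release, chart and namespace names are invented, and the exact schema depends on your helmfile version):

```yaml
environments:
  staging:
  production:

releases:
  - name: myapp
    namespace: myapp-{{ .Environment.Name }}
    chart: stable/myapp
    values:
      - values-{{ .Environment.Name }}.yaml
# only deploy the debug toolbox outside production
{{ if ne .Environment.Name "production" }}
  - name: debug-toolbox
    namespace: myapp-{{ .Environment.Name }}
    chart: stable/toolbox
{{ end }}
```

Selecting the environment is then just a matter of, say, helmfile --environment production apply.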
Helmfile also has some tricks up its sleeve when it comes to secrets. Most trivially, if your CI allows you to configure secrets via environment variables, you can consume these directly in the helmfile.yaml. You can also store secrets in version control, encrypted in a YAML file, and then have another Helm plugin, helm-secrets, decrypt them with PGP or AWS KMS.
Helmfile has some features to help you as the size of your deployment grows. You can, for example, specify a selector on commands to only apply them to matching releases. This can be helpful if deploying all the changes at once is likely to create too much churn in your cluster. You can also split the file into multiple files in one directory (executed in lexical order) or over multiple directories (accessed via a glob syntax).
For anything else, there are prepare and cleanup hooks to allow you to execute arbitrary commands before and after deployment. Oh, and if you’re using a containerized deployment pipeline, it’s available packaged up in an image, ready for use. Finally, if you don’t take to Helmfile, take a look at Helmsman instead.
Helm: the Package Manager for Kubernetes
Friday, October 5th, 2018

I wanted to take a look at ksonnet and kustomize but felt I should write about what I know best first, and that’s Helm. All three tools are trying to tackle the same basic problem: enabling the re-use of Kubernetes configuration where typically we want some level of customisation each time we use the configuration, whether that’s to reflect the latest deployment or the environment we’re deploying to.
The starting point for Helm is a client binary called, unsurprisingly, helm, which is available from brew as kubernetes-helm. The current version of Helm also has a server-side component called Tiller, and that is deployed by executing helm init --wait (the --wait flag indicates to wait until the Tiller pod has started before returning). Note that Helm is pretty picky about having matching versions of client and server.
In keeping with Kubernetes’ nautical theme, Helm’s unit of packaging is the chart. A new chart called test can easily be constructed with the command helm create test. This results in a directory called test with the following contents:
```
- Chart.yaml
- charts
- templates
  - NOTES.txt
  - _helpers.tpl
  - deployment.yaml
  - ingress.yaml
  - service.yaml
- values.yaml
```
Chart.yaml contains some basic meta-data about the chart such as name, description and version. Note that Helm draws a distinction between the version of the chart and the version of the application that the chart deploys. We’ll come back to the empty charts directory later. The templates directory contains the YAML files that we’d expect to find for a basic Kubernetes application and these can be extended and/or replaced as needed. As the directory name suggests though, and this is the key to reuse, the files are actually templates. If we look in deployment.yaml we’ll see the mustache syntax used to support substitution:
```yaml
spec:
  containers:
    - name: {{ .Chart.Name }}
      image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
      imagePullPolicy: {{ .Values.image.pullPolicy }}
```
We can see that it’s going to use the name from the Chart.yaml for the container name. The default contents for the Values fields are taken from values.yaml and typically this also contains comments describing the intent of each value. The templates use Go template syntax and support the sprig functions along with a few Helm-specific extensions. The syntax is pretty rich and supports pretty much any manipulation you’re likely to want to perform on configuration. The _helpers.tpl file defines some helper functions that are used throughout the chart. Finally, the NOTES.txt contains text that is output when the chart is installed, typically providing usage instructions for the installed application. This file also supports templating.
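For a flavour of those helper functions, the generated _helpers.tpl contains definitions along these lines (a sketch; the exact helpers vary by Helm version):

```
{{- define "test.fullname" -}}
{{- printf "%s-%s" .Release.Name .Chart.Name | trunc 63 | trimSuffix "-" -}}
{{- end -}}
```

Templates elsewhere in the chart can then write {{ template "test.fullname" . }} rather than repeating the naming logic.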
The default generated chart deploys a pod with nginx in it. If we look in values.yaml we can see that, by default, it’s going to use the stable image tag. We can override this at install time. For example:
```shell
helm install --name test-release --wait --set image.tag=latest test
```
If we want to override lots of values then we can specify them via a YAML file instead. Indeed, we can specify multiple files, so there might, for example, be one for production and another for us-east-1. The name here is optional but, at least for scripting, it’s easier not to use one of the generated names. Note that, although the release is deployed to a namespace (and Helm isn’t capable of tracking resources that explicitly target some other namespace), the name is scoped to the Tiller instance (i.e. the cluster if you only have one Tiller install).
There are other commands for operating on releases: delete, upgrade, rollback, list, status and get, all of which do pretty much what you’d expect of them. As the upgrade, rollback and history commands suggest, Helm is tracking revisions of a release. Tip: if you’re in some CD pipeline and you don’t know whether you’re upgrading or installing for the first time, use helm upgrade --install and it will do the right thing. Something else to watch out for is the --wait option on an upgrade. This waits until you have at least replicas - maxUnavailable pods available. Given that these are both typically one by default, don’t be surprised when it doesn’t appear to be waiting!
We’ve just installed a chart off the local filesystem but, as with any decent package manager, Helm has the concept of a repository. By default, the helm CLI is configured with the stable and incubator repositories. You can search these with helm search or head over to Kubeapps for a shiny catalog. These show the real power of Helm when, with a simple helm install stable/wordpress, you can re-use someone else’s hard work in defining the best practice for deploying WordPress on Kubernetes. You can add other repositories (e.g. that for Jenkins X) and you can create your own, either via a simple file server or, for a read-write repository, using ChartMuseum and the Monocular UI.
The packages themselves are simply .tgz files, although the helm package command also supports things like signing the package. I said I’d come back to the charts directory, and the WordPress chart is a good example of how this can be used. The WordPress chart actually also deploys the MariaDB chart to provide persistent storage. This is achieved by placing a requirements.yaml in the root of the chart that specifies the dependency. Dependencies can be fetched with helm dependency update or resolved at packaging time with the --dependency-update flag.
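A requirements.yaml expressing that kind of dependency looks something like the following; the version range, repository URL and condition flag are indicative rather than copied from the actual WordPress chart:

```yaml
dependencies:
  - name: mariadb
    version: 5.x.x
    repository: https://kubernetes-charts.storage.googleapis.com/
    condition: mariadb.enabled
```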
Jenkins X makes use of this sub-chart capability in the way it deploys environments. The repo for each environment contains a Helm chart which specifies as dependencies all of the application charts to deploy into the environment. One downside of this approach is that you then only see a Helm release for the entire environment. Another issue with sub-charts relates to the way their values are overridden. The values for a sub-chart can be overridden by prefixing with the name or alias given in requirements.yaml, but there is no good way to get the same value into multiple sub-charts. If you have control over the sub-charts, you can write them to retrieve values from a global scope but that doesn’t help if you’re trying to re-use someone else’s efforts.
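To illustrate the difference, here is a sketch of an umbrella chart's values.yaml (the chart and value names are invented):

```yaml
# Values under a sub-chart's name are scoped to that sub-chart:
# the mariadb chart sees this as .Values.replication.enabled.
mariadb:
  replication:
    enabled: false

# Values under global are visible to the parent chart and every
# sub-chart as .Values.global.*, but the sub-charts must have been
# written to look there.
global:
  environment: staging
```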
Helm provides lots of other goodness. You can annotate resources with hooks so that they are run pre or post install, upgrade, rollback or delete. There is even a crd-install annotation to ensure that your CRDs are created before other resources attempt to use them. You should also know about helm lint and helm test. The latter executes resources in the chart with the test-success or test-failure hook annotations. You can use these to provide a detailed check of whether a release was successfully deployed.
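A minimal test resource might look like the following; the pod name, image and the URL being checked are placeholders:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: test-connection
  annotations:
    "helm.sh/hook": test-success
spec:
  restartPolicy: Never
  containers:
    - name: check
      image: busybox
      command: ["wget", "-qO-", "http://test-release:80"]
```

helm test then runs this pod and reports the release as passing only if it exits successfully.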
An overview of Helm wouldn’t be complete without a reference to Helm v3. At the beginning I indicated that the current version deploys a server-side component. That is particularly problematic when it comes to security as, even if you enable mutual TLS, any user that can connect can perform any action supported by the service account under which Tiller is running, losing you all of Kubernetes’ RBAC benefits. You can mitigate the problem by installing a Tiller per namespace but you’re still left managing certificates. IBM Cloud Private implemented their own Tiller to do identity assertion in order to overcome this problem. Jenkins X now supports using helm template to render charts client-side before deploying with kubectl. This means that you don’t get any of the Helm release management, but then that is handled by Jenkins X anyway. Helm 3 promises to do away with Tiller but still share release information server-side via CRDs. Sadly, as James alludes to in his blog post, there’s not a lot of public progress being made on Helm 3 at the moment.