Archive for the ‘Kubernetes’ Category

Helm: for better or worse?

Monday, June 9th, 2025

A few weeks ago, one of my colleagues at JUXT gave a presentation on Helm, and this started me thinking back over my own experiences with the tool. It appears I already had a lot to say on the subject back in 2018! Since then, I’ve made extensive use of Helm at CloudBees where we had an umbrella chart to deploy the entire SaaS platform, and at R3. It’s that latter experience that I’m going to talk about in this post.

Helm and Corda

The main Helm chart in question is the one for R3’s Corda DLT, which you can find on GitHub. The corda.net website has, unfortunately, been sunset, but my blog post describing the rationale for using Helm is still available on the Internet Archive. Another article explains how the chart can be used, along with those for Kafka and Postgres, to spin up a complete Corda deployment quickly.

As an aside, it was a conscious decision not to provide a chart that packaged Corda along with those Kafka and PostgreSQL prereqs. The concern was that customers would take this and deploy it to production without thinking about what a production deployment of Kafka or Postgres entails. Not to mention wanting to make it clear that these were not components that we, as a company, were providing support for.

As a cautionary tale: despite its name, the corda-dev-prereqs chart referenced in that last article (which creates a decidedly non-HA deployment of Kafka and PostgreSQL) found itself being deployed in places it shouldn’t have been…

More Go than YAML

Whilst the consumer experience with the Helm chart was pretty good, things weren’t so rosy on the authoring side. The combined novelty of Kubernetes configuration and Go templating was just too much for many developers. While some did engage, ownership of the chart definitely remained with the DevOps team that authored the initial version, rather than the application developers.

The complexity of the chart also ramped up rapidly. With multiple services requiring almost identical configuration, we soon moved from YAML with embedded Go to Go with embedded YAML! That problem is not unique to Helm; I remember having the same issue with JSPs many moons ago.

The lack of typing, combined with the fact that all functions return strings, started to make the chart fragile, particularly without any good testing of the output with different override values.
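
To give a flavour of the problem, here is a hypothetical helper of the sort the chart accumulated (not a snippet from the actual chart): a named template renders to a string, so any caller that wants to build on the result has to round-trip it through fromYaml, and nothing tells you at authoring time whether that string was even valid YAML.

  {{/* Hypothetical helper: whatever it renders comes back as a string */}}
  {{- define "corda.workerResources" -}}
  requests:
    memory: {{ .Values.workerMemory | default "2Gi" | quote }}
    cpu: {{ .Values.workerCpu | default "1" | quote }}
  {{- end }}

  {{/* The caller has to parse the string back into a map with fromYaml;
       a stray indentation change breaks that parse without any obvious error */}}
  {{- $resources := include "corda.workerResources" . | fromYaml }}
  resources:
    requests:
      memory: {{ $resources.requests.memory }}
      cpu: {{ $resources.requests.cpu }}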

Two charts are not better than one

If you look at the GitHub repository, you might wonder why most of the logic for the chart sits in a separate library chart (corda-lib) on which the main corda chart depends. What you can’t see is that we had a separate Helm chart for use by paying customers. This was largely identical to the open-source chart, but included some additional configuration overrides. The library chart was an attempt to share as much logic as possible between the two.
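
The wiring for that is just a normal chart dependency; something along these lines in the main chart’s Chart.yaml (version numbers and the repository reference are illustrative):

  apiVersion: v2
  name: corda
  version: 5.0.0
  dependencies:
    - name: corda-lib
      version: 5.0.0
      repository: file://../corda-lib   # or a chart repository URL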

What we couldn’t share was the values.yaml itself and the corresponding JSON schema, and as a consequence, there was always a certain amount of double fixing that went on. What we really needed was a first-class mechanism for extending a chart.

Helm hooks

Although there were other niggles, the last issue I’m going to talk about is the use of Helm hooks. Corda has two mechanisms for bootstrapping PostgreSQL and Kafka: an administrator can use the CLI to generate the required SQL and topic definitions, or the chart can perform the setup automatically when it is installed. We expected customers to use the former mechanism, at least in production, but the latter was used in most of our development and testing, and by the services team in pilot projects. The automated approach used a pre-install hook to drive a containerised version of the CLI to perform the setup.
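
Stripped right down, that hook is just a Job with the appropriate Helm annotations. A sketch (the image, commands and naming are placeholders rather than the real chart content):

  apiVersion: batch/v1
  kind: Job
  metadata:
    name: {{ .Release.Name }}-setup
    annotations:
      # run once, before the release's resources are installed, then clean up
      "helm.sh/hook": pre-install
      "helm.sh/hook-delete-policy": hook-succeeded
  spec:
    template:
      spec:
        restartPolicy: Never
        containers:
          - name: setup
            image: corda-cli:latest                        # placeholder for the containerised CLI
            args: ["initial-config", "create-db-schemas"]  # placeholder commands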

So far, so good. We then started to look at using ArgoCD to deploy the chart. ArgoCD doesn’t install Helm charts directly; instead, it renders the templates and then applies the resulting Kubernetes configuration. It does have some understanding of Helm hooks, converting them into ArgoCD waves, but it doesn’t distinguish between install and upgrade hooks. This would lead ArgoCD to try to rerun the setup during an upgrade.

Now, here some responsibility must lie with the Corda team, as those setup commands should have been idempotent, but they weren’t. The answer, for us, was to use an alternative to ArgoCD (worth a separate post), but our customers might not have the luxury of that choice.

Summary

Does all of the above mean that I think Helm is a bad choice? As always, it depends. For ‘packaged’ Kubernetes configuration, I still believe it’s a better choice than requiring consumers to understand your YAML sufficiently to be able to apply suitable modifications with Kustomize. In particular, pushing Kustomize opens up your support organisation to dealing with customers who are deploying your solution with essentially arbitrary YAML.

In the case of Corda, we underinvested in building the skills to make the best of Helm. Fundamentally, though, I’d suggest that we simply outgrew it. If I were still working on its evolution, the next step would undoubtedly have been to implement an operator and write all of that complicated logic in a language that properly supports testing and reuse.

Knative Intro @ Devoxx UK

Thursday, May 30th, 2019

I presented an introduction to Knative at Devoxx UK, the recording for which can be found below. I’m afraid I deviated somewhat from the abstract given the changes to the project in the five months since I submitted it. With only half an hour, I probably shouldn’t have tried to cover Tekton as well, but I wanted to have an excuse to at least touch on Jenkins X, however briefly! The demo gods largely favoured me, except when hey (the HTTP load generator) failed to return (not the part of the demo I was expecting to fail!). The script and source for the demo are on GitHub, although I’m afraid I haven’t attempted to abstract them away from the Docker Hub/GCP accounts.

Debugging with Telepresence

Monday, February 11th, 2019

I’ve spent the last few days trying to debug an issue on Kubernetes with an external plugin that I’ve been writing in Go for Prow. Prow’s hook component forwards GitHub webhooks on to the plugin, which mounts in various pieces of configuration from the cluster (the Prow config, the GitHub OAuth token and the webhook HMAC secret). As a consequence, running the plugin standalone in my dev environment is tricky, but it is just the sort of scenario that Telepresence is designed for.

The following command is all that is needed to perform a whole host of magic:
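
With classic Telepresence, it is something along these lines (the port, and the flags used to point the plugin at the synced configuration, are placeholders):

  # swap out the in-cluster deployment and run the binary locally instead
  telepresence \
    --swap-deployment my-plugin-deployment \
    --expose 8888 \
    --mount /tmp/tp \
    --run ./my-plugin --config-path /tmp/tp/etc/plugin-config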

  • It locates the my-plugin-deployment deployment already running in the cluster and scales down the number of replicas to zero.
  • It executes the my-plugin binary locally and creates a replacement deployment in the cluster that routes traffic to the local process on the exposed port.
  • It finds the volumes defined in the deployment and syncs their contents to /tmp/tp using the mount paths also specified in the deployment.
  • Although not needed in this scenario, it also sets up the normal Kubernetes environment variables around the process and routes network traffic back to the cluster.

Now, it was convenient in this case that the binary already exposed command line arguments for the configuration files so that I could direct them to the alternative path. Failing that, you could always use Telepresence in its --docker-run mode and then mount the files onto the container at the expected location.

And the issue I was trying to debug? I had used the refresh plugin as my starting point and this comment turned out to be very misleading. The call to configAgent.Start() does actually set the logrus log level based on the prow configuration (to info by default). As a consequence, everything was actually working as it should and my debug statements just weren’t outputting anything!

Oracle Code One: Continuous Delivery to Kubernetes with Jenkins and Helm

Wednesday, October 31st, 2018

Last week I was out in San Francisco at Oracle Code One (previously known as JavaOne). I had to wait until Thursday morning to give my session on “Continuous Delivery to Kubernetes with Jenkins and Helm”. This was the same title I presented in almost exactly the same spot back in February at IBM’s Index Conference but there were some significant differences in the content.

https://www.slideshare.net/davidcurrie/continuous-delivery-to-kubernetes-with-jenkins-and-helm-120590081

The first half was much the same. As you can see from the material on SlideShare and GitHub, it covers deploying Jenkins on Kubernetes via Helm and then setting up a pipeline with the Kubernetes plugin to build and deploy an application, again, using Helm. This time, I’d built a custom Jenkins image with the default set of plugins used by the Helm chart pre-installed which improved start-up times in the demo.

I had previously mounted in the Docker socket to perform the build but removed that and used kaniko instead. This highlighted one annoyance with the current approach used by the Kubernetes plugin: it uses exec on long-running containers to execute a shell script with the commands defined in the pipeline. The default kaniko image is a scratch image containing just the executor binary – nothing there to keep it alive, nor a shell to execute the script. In his example, Carlos uses the kaniko:debug image, which adds a busybox shell, but that requires other hoops to be jumped through because the shell is not in the normal location. Instead, I built a kaniko image based on alpine.
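
A sketch of what such an image might look like (the upstream image keeps everything under /kaniko; the tags and environment values are illustrative):

  # borrow the executor and its supporting files from the upstream image...
  FROM gcr.io/kaniko-project/executor:latest AS kaniko

  # ...and drop them onto a base that actually has a shell for the plugin to exec into
  FROM alpine:3.8
  COPY --from=kaniko /kaniko /kaniko
  ENV PATH=$PATH:/kaniko \
      DOCKER_CONFIG=/kaniko/.docker \
      SSL_CERT_DIR=/kaniko/ssl/certs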

The biggest difference from earlier in the year was, perhaps unsurprisingly, the inclusion of Jenkins X. I hadn’t really left myself enough time to do it justice. Given the usual terrible conference wifi and the GitHub outage earlier in the week, I had recorded a demo showing initial project creation, promotion, and update. I’ve added a voiceover so you can watch it for yourself below (although you probably want to go full-screen unless you have very good eyesight!).

Introduce poetry to your Kube config with ksonnet

Monday, October 15th, 2018

Returning to the 101 ways to create Kubernetes configuration theme, next up is ksonnet from the folks at Heptio. (I have no doubt that there are 101 ways to create Kubernetes configuration but I’m afraid I don’t really intend to cover all of them on this blog!) ksonnet has yet another take, different from both Helm and kustomize. In many ways, it is more powerful than either of them but that power comes at the cost of a fairly steep learning curve.

The name is derived from Jsonnet, a data templating language that came out of Google back in 2014. Jsonnet essentially extends JSON with a scripting syntax that supports the definition of programming constructs such as variables, functions, and objects. The ‘Aha!’ moment for me with ksonnet was in realizing that it could be used as a simple template structure in much the same way as Helm. You start with some Kubernetes configuration in JSON format (and yq is your friend if you need to convert from YAML to JSON first) and from there you can extract parameters. I say ‘it could’ because you’d typically only take this approach if you were actually converting existing configuration but realizing this helped me get beyond some of the slightly strange syntax you see in generated files.

As usual, Homebrew is your starting point: brew install ksonnet/tap/ks. ksonnet has an understanding of the different environments to which an application is deployed and, when you issue ks init myapp, it takes the cluster that your current kube config is pointing at as the default environment (although you can override this with --context).

ksonnet then has the concept of ‘prototypes’ which are templates for generating particular types of application component when supplied with suitable parameters. These are provided by ‘packages’ which, in turn, come from a ‘registry’ stored on GitHub. Stealing from the tutorial, we can generate code for a simple deployment and service with the deployed-service prototype giving the image name and service type as parameters e.g.
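
Using the guestbook example from the tutorial, that looks something like this:

  ks generate deployed-service guestbook-ui \
    --image gcr.io/heptio-images/ks-guestbook-demo:0.1 \
    --type ClusterIP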

At this point, we can use ks show default to return the YAML that would be generated, or ks apply default to actually apply it to the default environment. I highly recommend doing the tutorial rather than the web-based tour, as it shows that you can get a long way with ksonnet without actually editing, or even looking at, any of the generated files. For example, you can use ks env add to create another environment and then ks param set to override the values of parameters for a particular environment as you might with Helm or kustomize.
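
Something like the following, for a hypothetical staging environment:

  # register another cluster/context as an environment...
  ks env add staging --context my-staging-context
  # ...override a component parameter just for that environment...
  ks param set guestbook-ui replicas 3 --env staging
  # ...and deploy
  ks apply staging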

Of course, the real power comes when you drop into the code and make use of ksonnet features like parts and modules to enable greater reuse of configuration in your application. At that point though, you really should take the time to learn Jsonnet properly!

kail: kubernetes tail

Friday, October 12th, 2018

A short post for today but it relates to a tool that every Kubernetes user should have in their toolbox: kail. Although most users probably know that kubectl logs will show the logs for a pod’s containers and that it has --tail and -f options, fewer probably know that it has a -l option to select pods based on label. Kail takes tailing Kubernetes logs to a whole new level.

For Homebrew users, it’s available via brew install boz/repo/kail. When executed without any arguments, it tails logs for all containers in the cluster, which is probably not what you want unless your cluster is very quiet! There are, however, flags to let you filter not just on pod, container, and label, but also namespace, deployment, replica set, ingress, service, or node. Flags of the same type are ORed together; different flags are ANDed. And that’s pretty much all there is to it, but anyone who finds themselves watching the logs of any moderately complex application will wonder how they lived without it!
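
A couple of examples (the names are invented):

  # everything in the staging namespace
  kail --ns staging

  # only pods that are behind a given service and part of a given deployment
  kail --svc frontend -d frontend-v2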

Kustomizing Kubernetes Konfiguration

Thursday, October 11th, 2018

Finally, I get to write that blog post on kustomize! kustomize is yet another tool attempting to solve the problem of how to make Kubernetes configuration re-usable. Unlike, say, Helm, kustomize allows configuration to be overridden at consumption time without necessarily having allowed for it when the configuration was originally produced. This is great if you are attempting to re-use someone else’s configuration. On the flip-side, you might prefer to use something like Helm if you actually want to limit the points of variability e.g. to ensure standardization across environments or applications.

You know the drill by now: the Go binary CLI can be obtained via brew install kustomize. There is one main command and that is kustomize build. That expects to be pointed at a directory or URL containing a kustomization.yaml file. Running the command outputs the required Kubernetes resources to standard output, where they can then be piped to kubectl if desired.

The kustomization.yaml can contain the following directives:

  • namespace – to add a namespace to all the output resources
  • namePrefix – to add a prefix to all the resource names
  • commonLabels – to add a set of labels to all resources (and selectors)
  • commonAnnotations – to add a set of annotations to all resources
  • resources – an explicit list of YAML files to be customized
  • configMapGenerator – to construct ConfigMaps on the fly
  • secretGenerator – to construct Secrets via arbitrary commands
  • patches – YAML files containing partial resource definitions to be overlaid on resources with matching names
  • patchesJson6902 – applies a JSON patch that can add or remove values
  • crds – lists YAML files defining CRDs (so that, if their names are updated, resources using them are also updated)
  • vars – used to define variables that reference resource/files for replacement in places that kustomize doesn’t handle automatically
  • imageTags – updates the tag for images matching a given name

That’s a pretty comprehensive toolbox for manipulating configuration. The only directive I didn’t mention was bases with which you can build a hierarchy of customizations. The prototypical example given is of a base configuration with different customizations for each deployment environment. Note that you can have multiple bases, so aws-east-staging might extend both aws-east and staging.
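
Pulling a few of those directives together, a staging overlay might look like this (names and tags invented), with the base’s kustomization.yaml simply listing its YAML files under resources:

  # overlays/staging/kustomization.yaml
  namePrefix: staging-
  namespace: staging
  commonLabels:
    env: staging
  bases:
    - ../../base
  patches:
    - replica-count.yaml      # a partial Deployment containing only the fields to override
  imageTags:
    - name: example/my-app
      newTag: v1.2.3-rc1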

One of the refreshing things about kustomize is that it explicitly calls out a set of features that it doesn’t intend to implement. This introduces the only other command that the CLI supports: kustomize edit. Given that one of the stated restrictions is that kustomize does not provide any mechanism for parameterising individual builds, the intent of this command is to allow you to script modifications to your kustomization.yaml prior to calling build.

It’s worth noting that kustomize can be used in combination with Helm. For example, you could run helm template and then use kustomize to make additional modifications that are not supported by the original chart. You can also use them in the reverse order. The Helmfile docs describe how to use Helmfile’s hooks to drive a script that will use kustomize to construct the required YAML, but then wrap it in a shell chart so that you get the benefit of Helm’s releases.
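
In its simplest form, the first of those combinations is just the following (chart and directory names invented; the base kustomization.yaml lists helm-rendered.yaml under resources):

  # render the chart without installing it...
  helm template ./mychart --values values.yaml > base/helm-rendered.yaml

  # ...then let an overlay adjust the output before it goes anywhere near the cluster
  kustomize build overlays/prod | kubectl apply -f -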

Helmfile and friends

Monday, October 8th, 2018

Having written a post on Helm, I feel obliged to follow it up with one on Helmfile, a project that addresses some of the issues that I identified with deploying multiple Helm charts. In particular, it provides an alternative approach to the umbrella chart mechanism that Jenkins X uses for deploying the set of charts that represent an environment.

Yet again, we have a Go binary, available via brew install helmfile. At its most basic, we then have a helmfile.yaml that specifies a set of releases with a name, chart, namespace and override values for each. A helmfile sync will then perform an install/upgrade for all the releases defined in the file. One thing I failed to mention in my Helm post was that Helm supports plugins on the client side. One such plugin is the helm-diff plugin which, as you’d probably guess from the name, gives you a diff between the current state of a release and what it would look like after an upgrade or rollback. The plugin is installed with:
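
  # the helm-diff plugin lives in the databus23/helm-diff repository
  helm plugin install https://github.com/databus23/helm-diff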

With this in place, we can now use helmfile diff to see the changes that would take place across all of our releases. The helmfile apply command combines this with a sync to conditionally perform an upgrade only if there are differences. There is a set of other helmfile commands that all perform aggregate operations across all the releases: delete, template, lint, status and test.

So far, so good, but nothing that couldn’t be achieved with a pretty short bash script. Where things get more interesting is that the helmfile.yaml is actually a template in the same way as the templates in a Helm chart. This means we can start to do things like defining values in one place and then reusing them across multiple releases. Helmfile has the explicit concept of an environment, passed in as a parameter on the CLI. We can use a single YAML file and use templating to have different values apply in each environment or, in the extreme, only deploy charts in some environments.
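
A cut-down sketch of what that can look like (the chart references and values are invented):

  environments:
    default:
      values:
        - replicaCount: 1
    production:
      values:
        - replicaCount: 3

  releases:
    - name: my-app
      namespace: apps
      chart: stable/my-app                # placeholder chart reference
      values:
        - replicas: {{ .Environment.Values.replicaCount }}
  {{- if ne .Environment.Name "production" }}
    # debugging tooling stays out of production entirely
    - name: debug-tools
      namespace: apps
      chart: stable/debug-tools           # placeholder chart reference
  {{- end }}

A helmfile --environment production apply then picks up the production values and skips the debug tooling.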

Helmfile also has some tricks up its sleeve when it comes to secrets. Most trivially, if your CI allows you to configure secrets via environment variables you can consume these directly in the helmfile.yaml. You can also store secrets in version control encrypted in a YAML file and then have another Helm plugin, helm-secrets, decrypt them with PGP or AWS KMS.

Helmfile has some features to help you as the size of your deployment grows. You can, for example, specify a selector on commands to only apply them to matching releases. This can be helpful if deploying all the changes at once is likely to create too much churn in your cluster. You can also split the file into multiple files in one directory (executed in lexical order) or over multiple directories (accessed via a glob syntax).
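
For example, with a label on the relevant releases (the label itself is invented):

  # in helmfile.yaml, releases can carry arbitrary labels:
  #   - name: my-app
  #     labels:
  #       tier: frontend
  # ...which any command can then select on:
  helmfile --selector tier=frontend apply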

For anything else, there are prepare and cleanup hooks to allow you to execute arbitrary commands before and after deployment. Oh, and if you’re using a containerized deployment pipeline, it’s available packaged up in an image, ready for use. Finally, if you don’t take to Helmfile, take a look at Helmsman instead.