Archive for the ‘CI/CD’ Category

Oracle Code One: Continuous Delivery to Kubernetes with Jenkins and Helm

Wednesday, October 31st, 2018

Last week I was out in San Francisco at Oracle Code One (previously known as JavaOne). I had to wait until Thursday morning to give my session on “Continuous Delivery to Kubernetes with Jenkins and Helm”. This was the same title I presented in almost exactly the same spot back in February at IBM’s Index Conference, but there were some significant differences in the content.

https://www.slideshare.net/davidcurrie/continuous-delivery-to-kubernetes-with-jenkins-and-helm-120590081

The first half was much the same. As you can see from the material on SlideShare and GitHub, it covers deploying Jenkins on Kubernetes via Helm and then setting up a pipeline with the Kubernetes plugin to build and deploy an application, again using Helm. This time, I’d built a custom Jenkins image with the default set of plugins used by the Helm chart pre-installed, which improved start-up times in the demo.

I had previously mounted in the Docker socket to perform the build but removed that and used kaniko instead. This highlighted one annoyance with the current approach used by the Kubernetes plugin: it uses exec on long-running containers to execute a shell script containing the commands defined in the pipeline. The default kaniko image is a scratch image containing just the executor binary, so there is nothing to keep it alive, nor a shell to execute the script. In his example, Carlos uses the kaniko:debug image, which adds a busybox shell, but that requires jumping through other hoops because the shell is not in the normal location. Instead, I built a kaniko image based on alpine.

The biggest difference from earlier in the year was, perhaps unsurprisingly, the inclusion of Jenkins X. I hadn’t really left myself enough time to do it justice. Given the usual terrible conference wifi and the GitHub outage earlier in the week, I had recorded a demo showing initial project creation, promotion, and update. I’ve added a voiceover so you can watch it for yourself below (although you probably want to go full-screen unless you have very good eyesight!).

Helmfile and friends

Monday, October 8th, 2018

Having written a post on Helm, I feel obliged to follow it up with one on Helmfile, a project that addresses some of the issues that I identified with deploying multiple Helm charts. In particular, it provides an alternative approach to the umbrella chart mechanism that Jenkins X uses for deploying the set of charts that represent an environment.

Yet again, we have a Go binary, available via brew install helmfile. At its most basic, we then have a helmfile.yaml that specifies a set of releases, with a name, chart, namespace and override values for each. A helmfile sync will then perform an install/upgrade for all the releases defined in the file. One thing I failed to mention in my Helm post was that Helm supports plugins on the client side. One such plugin is the helm-diff plugin which, as you’d probably guess from the name, gives you a diff between the current state of a release and what it would look like after an upgrade or rollback. The plugin is installed with:
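    # install the helm-diff plugin from its GitHub repository
    helm plugin install https://github.com/databus23/helm-diff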

With this in place, we can now use helmfile diff to see the changes that would take place across all of our releases. The helmfile apply command combines this with a sync to conditionally perform an upgrade only if there are differences. There is a set of other helmfile commands that all perform aggregate operations across all the releases: delete, template, lint, status and test.
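To make that concrete, here’s a minimal sketch. The release names, charts and values are purely illustrative, and I’ve wrapped the YAML in a heredoc just to keep the example self-contained:

    # write a minimal helmfile.yaml with two releases (names/charts illustrative)
    cat > helmfile.yaml <<'EOF'
    releases:
      - name: backend
        namespace: demo
        chart: stable/redis
        values:
          - usePassword: false
      - name: frontend
        namespace: demo
        chart: ./charts/frontend
    EOF

    helmfile sync    # install/upgrade every release in the file
    helmfile diff    # requires the helm-diff plugin installed above
    helmfile apply   # diff first, then sync only if something changed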

So far, so good, but nothing that couldn’t be achieved with a pretty short bash script. Where things get more interesting is that the helmfile.yaml is actually a template in the same way as the templates in a Helm chart. This means we can start to do more interesting things like defining values in one place and then reusing them across multiple releases. Helmfile has the explicit concept of an environment, passed in as a parameter on the CLI. We can use a single YAML file, with templating to apply different values in each environment or, in the extreme, to only deploy certain charts in some environments.
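Here’s a rough sketch of what that looks like. The names, values and the conditional release are all illustrative, and note that older helmfile versions expose the environment values as .Environment.Values:

    cat > helmfile.yaml <<'EOF'
    environments:
      staging:
        values:
          - env/staging.yaml      # e.g. replicas: 1
      production:
        values:
          - env/production.yaml   # e.g. replicas: 3

    releases:
      - name: myapp
        namespace: {{ .Environment.Name }}
        chart: ./charts/myapp
        values:
          - replicaCount: {{ .Environment.Values.replicas }}
    {{ if eq .Environment.Name "production" }}
      - name: cert-manager
        namespace: kube-system
        chart: stable/cert-manager
    {{ end }}
    EOF

    helmfile --environment staging apply
    helmfile --environment production apply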

Helmfile also has some tricks up its sleeve when it comes to secrets. Most trivially, if your CI allows you to configure secrets via environment variables you can consume these directly in the helmfile.yaml. You can also store secrets in version control encrypted in a YAML file and then have another Helm plugin, helm-secrets, decrypt them with PGP or AWS KMS.
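A quick sketch of both approaches follows; the variable, file and value names are illustrative, and the secrets entry assumes the helm-secrets plugin is installed:

    cat > helmfile.yaml <<'EOF'
    releases:
      - name: myapp
        chart: ./charts/myapp
        values:
          # pulled straight from the CI environment at deploy time
          - githubToken: {{ requiredEnv "GITHUB_TOKEN" }}
        secrets:
          # encrypted in version control; decrypted on the fly by helm-secrets
          - secrets/myapp.yaml
    EOF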

Helmfile has some features to help you as the size of your deployment grows. You can, for example, specify a selector on commands to only apply them to matching releases. This can be helpful if deploying all the changes at once is likely to create too much churn in your cluster. You can also split the file into multiple files in one directory (executed in lexical order) or over multiple directories (accessed via a glob syntax).
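For example, if each release carries labels (set via a labels: block on the release in helmfile.yaml), commands can be restricted to the matching subset:

    # only act on the releases whose labels match
    helmfile --selector tier=frontend diff
    helmfile -l name=backend apply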

For anything else, there are prepare and cleanup hooks to allow you to execute arbitrary commands before and after deployment. Oh, and if you’re using a containerized deployment pipeline, it’s available packaged up in an image, ready for use. Finally, if you don’t take to Helmfile, take a look at Helmsman instead.
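A rough sketch of those prepare and cleanup hooks, assuming the release-level hook syntax; the script paths are illustrative:

    cat > helmfile.yaml <<'EOF'
    releases:
      - name: myapp
        chart: ./charts/myapp
        hooks:
          - events: ["prepare"]
            command: "./scripts/fetch-dependencies.sh"
          - events: ["cleanup"]
            command: "./scripts/tidy-up.sh"
    EOF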

draft up

Friday, September 21st, 2018

For my next few posts, I thought I’d pick on some of the technologies that Jenkins X uses under the covers. The first of these is Draft, which originally came from Deis and went with them to Microsoft when the company was acquired. Draft’s aim is to streamline the process of developing code that runs on Kubernetes. It’s evolved a bit since it was originally released: having started with a client-server architecture, it is now entirely client-based. There are many good reasons for this, although one of the things that differentiated Draft originally was that it didn’t need anything on the developer’s machine other than Draft itself: not even Docker.

The part of Draft that Jenkins X uses is the ability to add a Dockerfile and Helm chart to an existing project. The combination of Dockerfile and Helm chart is stored in what Draft calls a ‘pack’. On running a draft create, Draft does some nifty analysis to detect the language being used in the project in order to select the appropriate pack to use. As you and I know though, language alone is not going to tell me whether I’m, say, running an executable JAR or providing a WAR file to run on an app server. Fortunately, there’s a --pack option so I can tell Draft which pack to use. The pack mechanism is nicely extensible with the ability to specify new repositories (simply a Git repo containing a packs folder). The packs used by Jenkins X (which include one for Liberty, even if it isn’t very good) can be found here. Draft is also clever enough to know that, if I already have a Dockerfile or chart, I probably don’t want them replaced with the versions from the pack.
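For example, something along these lines; the java pack name is from the default set, while the extra repository URL is just a placeholder:

    draft create --pack=java   # override Draft's language detection
    # a pack repository is simply a Git repo containing a packs folder
    draft pack-repo add https://github.com/example/my-draft-packs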

Once I have a Dockerfile and chart, the next step is to deploy my application using draft up. Draft expects to find the Kubernetes context and Helm already set up. It’ll build the Docker image and, if a registry is configured, push the image there. The latter isn’t compulsory though, so if I’m using Docker Desktop (the new name for Docker for Mac/Windows) or have my Docker client pointing at my Minikube Docker daemon, then I can just use the image straight out of the cache. It will then use Helm to deploy the application, passing through overrides for image.repository and image.tag to reference the image that’s just been built (using a unique tag). It will even set up an imagePullSecret if necessary. You can use draft logs to see the output from the build and deploy.
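In practice, the loop looks something like this:

    draft up     # build the image (and push it if a registry is configured), then deploy via Helm
    draft logs   # replay the output from the most recent build and deploy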

Originally, Draft came with a ‘watch’ mode where it would attempt to detect file updates and automatically rebuild. Thankfully that now seems to have been dropped as, with a completely unoptimised build cycle, it really wasn’t practical. The Java pack is particularly bad as the provided Dockerfile doesn’t even attempt to cache the Maven dependencies. Now you simply run draft up again to trigger a rebuild (which you could hook up to your editor’s save option if you really wished).

The last part of the Draft developer experience is draft connect which pipes the logs from any deployed containers to your terminal, along with setting up port forwarding. Sensibly, it allows you to configure the local ports that you want to forward to and this, along with other configuration, can be stored in a draft.toml file with your application. (The authors have to be congratulated for breaking with the current trend and using TOML rather than YAML!)

There are a few extra niceties in that you can define additional plugins (arbitrary commands that share the Draft metadata via environment variables) and you can define tasks for a project that execute pre-up, post-deploy, and on cleanup. If, like me, you’re left wondering where these are documented, check out the Draft Enhancement Proposals where they were introduced.

All in all, there is nothing here that couldn’t be achieved by scripting together a few standard commands, but just because it’s simple doesn’t mean that it isn’t useful. It’s one of many projects that are attempting to reduce developer friction when deploying to Kubernetes, and you can expect a few more posts covering others…

Jenkins X

Wednesday, September 19th, 2018

Having started to get some rhythm back in to the publishing of my personal blog posts, I thought it was about time that I started posting some technical content again too. I’m having to do lots of new learning at the moment and, if nothing else, writing about it makes sure that I’ve understood it and helps me remember at least some of what I’ve learnt. As before, these posts in no way indicate the position of my employer nor, in general, should you read into them anything about technical direction. On the whole, they are just about topics that I’ve found sufficiently interesting to write a little about. There is, I have found, no knowing what will be of interest to other people (my all-time top post relates to Remote Desktop!). From an entirely selfish perspective, I don’t care if anyone reads what I write as it’s largely the writing that gives me value!

Having said all the above, the subject of this post is Jenkins X which is very much in the domain of my new employer! When it was announced back in March, I have to admit that I was somewhat sceptical. It was clearly aiming at much the same space that we were with the DevOps part of Microclimate. My view wasn’t helped by the fact that I couldn’t actually get it to run. It did (and still does) run best out on public cloud but I used up my free quotas on AWS and GCP a long time ago. I tried to run it on minikube and failed. It was also developed by the team behind Fabric8 which, although it showed lots of promise, was never incorporated into any of Red Hat’s commercial offerings. The same was not set to be true of Jenkins X and, six months later, my new employer has just announced that it now forms part of the CloudBees Core offering under the name of Kube CD. I’ll save details of that commercial offering for another post and restrict myself to talking about the open source Jenkins X project here.

So what exactly is Jenkins X? It enables Continuous Integration and Continuous Deployment of applications on Kubernetes. It happens to use Jenkins as the engine to perform those actions but, at least at a first pass, that is immaterial. Wrapped around that Jenkins is lots of Kubernetes-native goodness and, most importantly, a CLI by the name of jx. Thankfully, this time around the minikube experience worked for me just fine and getting up and running was as simple as:
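    # roughly, as I remember the commands: install the jx CLI via Homebrew
    # and have it create (or reuse) a minikube cluster
    brew tap jenkins-x/jx
    brew install jx
    jx create cluster minikube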

I have to say that I’m not a big fan of ‘verb followed by noun’ when it comes to CLI arguments as, although perhaps more readable, it makes end-user discovery harder (jx create tells me about a whole long list of largely unrelated things), but thankfully just typing jx gives a reasonable overview of the main options. Beware though that the CLI is heavily overloaded: it’s used not only for initialisation but also for subsequent actions performed by the developer and for those performed by the pipeline.

Perhaps the quickest way to demonstrate the capabilities is to then use a quick start:
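    # creates a new project from a quickstart; pick a stack and follow the prompts
    jx create quickstart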

This allows you to select a technology stack (everything from Android to Vert.x via Rust, Rails and React!) which lays down a skeleton application on disk. It then uses Draft to add a Dockerfile and Helm chart(s). It doesn’t stop there though. It will then help you create a repo on GitHub, check your code in, set up a multi-branch pipeline on the Jenkins instance it provisioned, and set up the webhook to trigger Jenkins on subsequent updates. (Webhooks don’t tend to work too well unless your minikube is internet facing but, given a bit more time, polling does the job eventually.) The default pipeline (defined by a Jenkinsfile in your application repository) uses Skaffold to build the application Docker image and push it to a registry. The Helm chart is published to the provided instance of ChartMuseum.

Jenkins X follows the GitOps model promulgated by Alexis Richardson and the team at Weaveworks. By default, it sets up two GitHub repositories that map to staging and production namespaces in the Kubernetes cluster. Additional environments can easily be defined via, you guessed it, jx create environment. These repositories make good use of ‘umbrella’ Helm charts to deploy specific versions of each of the application charts. By default, the master branch is automatically deployed to the staging environment but promotion to production is performed manually, for example:
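    # the application name and version here are illustrative
    jx promote myapp --version 0.0.83 --env production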

There is also the concept of a preview environment. Typically created for reviewing a pull request, preview environments can also be created manually via the CLI. These allow a specific version of the application to be accessed in a temporary namespace created just for that purpose. All of the Jenkins X configuration (environments, releases, …) is represented in the Kubernetes way: as Custom Resource Definitions.
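A few commands give a feel for this (flags and output vary a little by version):

    jx preview                # create a preview environment from the current application directory
    jx get previews           # list the preview environments
    kubectl get environments  # the Environment custom resources behind it all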

There’s plenty more to say about Jenkins X but I’ll save that for another post on another day. Hopefully this has given you enough of a flavour to encourage you to download the CLI and give it a try for yourself.