Another Classic Weekend

November 4th, 2018

It was another two days of racing this weekend. On Saturday BAOC had an urban race around Winchester based at Peter Symonds College. Christine was resting her knee so it was just the children and me competing. Their courses were confined to the college grounds, with Duncan finishing 5th M12- and Emma 3rd W12- (although they were running the same course and Duncan actually beat Emma).

The navigation wasn’t particularly challenging, with many long legs meaning there was lots of hard running to be done. With a late start, I knew what time I should be aiming for, and things became increasingly frantic as I headed into the last five controls. Needless to say, I wasted time on the last two controls but still managed to take first place. The time of 43 minutes looks more respectable in the context of the 10k I ran rather than the 6.4k quoted for the course length!

Sunday brought the November Classic. We all started today, although only because I’d entered Christine by mistake! There was light rain over Hampton Ridge whilst we were out (the picture above was taken later in the day). We met with mixed fortunes. Duncan had a good run, finishing second on M10A. Christine walked round a few controls before returning. Emma was out for over an hour without finding any of her controls. My legs didn’t feel too bad until the last part of the course. My downfall was repeatedly hunting for pits in the bracken, which saw me finish in 5th place. Thankfully, there are no events planned for next weekend!

OMM White

November 4th, 2018

Last weekend it was the OMM in the Black Mountains, South Wales. Christine’s parents had offered to mind the children so Christine and I were running the Medium Score together. There was a biting wind but blue skies as we set off on Saturday morning. There was some early indecision but we soon settled down to a steady mountain marathon pace. As the morning went on, the skies started to look increasingly ominous and, as we crossed one bit of particularly bleak hillside, the snow began and persisted for long enough to paint the mountainside white. We reached the campsite with around twenty minutes to spare – not long enough to have fitted anything else in.

It was a long night in the campsite, made more bearable by being able to chat to Christine’s brother and his wife in the tent next to us. Due to the cold, we both ‘slept’ in all of our clothes, including waterproofs. We were certainly glad to discover that, as the third mixed pair, we qualified for the chasing start and had an hour less to spend in the campsite in the morning.

Although we removed a layer, we both kept our waterproofs on for the whole of the second day. Christine’s knee was giving her grief (a likely outcome even before we started the weekend) and, as a consequence, we were setting a pretty stately pace. We reined in our plans as we went round and, although we finished with another 25 minutes to spare, at the speed we were going it still wouldn’t have got us another checkpoint. We were 47th on the second day which brought us down from 13th to 28th over the two days. Still respectable but not what we would have hoped for had we both been fit and healthy. On the plus side, it did mean we could slip away before the prize giving and make it home in reasonable time!

If you watch the promotional video, you’ll catch a brief glimpse of us finishing on the first day around the 1:33 mark. Thanks to Christine’s dad, who purchased the image above of us being reunited with the children at the finish. You can also find our routes from Day 1 and 2 on RouteGadget.

Oracle Code One: Continuous Delivery to Kubernetes with Jenkins and Helm

October 31st, 2018

Last week I was out in San Francisco at Oracle Code One (previously known as JavaOne). I had to wait until Thursday morning to give my session on “Continuous Delivery to Kubernetes with Jenkins and Helm”. This was the same title I presented in almost exactly the same spot back in February at IBM’s Index Conference but there were some significant differences in the content.

[Slides: “Continuous Delivery to Kubernetes with Jenkins and Helm”, David Currie – embedded from SlideShare]

The first half was much the same. As you can see from the material on SlideShare and GitHub, it covers deploying Jenkins on Kubernetes via Helm and then setting up a pipeline with the Kubernetes plugin to build and deploy an application, again using Helm. This time, I’d built a custom Jenkins image with the default set of plugins used by the Helm chart pre-installed, which improved start-up times in the demo.
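
For reference, here is a rough sketch of how such a pre-loaded image can be built and deployed; the plugin list, image name, and values file are illustrative rather than the exact ones used in the demo.

# bake the plugins into the image rather than having the chart install them at
# start-up (install-plugins.sh ships in the official jenkins/jenkins image)
cat <<'EOF' > Dockerfile
FROM jenkins/jenkins:lts
RUN /usr/local/bin/install-plugins.sh kubernetes workflow-aggregator workflow-job credentials-binding git
EOF
docker build -t example/jenkins-preloaded .

# deploy with the stable/jenkins chart (Helm 2 syntax), pointing it at the
# custom image via a values file
helm install stable/jenkins --name jenkins -f jenkins-values.yaml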

I had previously mounted in the Docker socket to perform the build but removed that and used kaniko instead. This highlighted one annoyance with the current approach used by the Kubernetes plugin: it uses exec on long-running containers to execute a shell script with the commands defined in the pipeline. The default kaniko image is a scratch image containing just the executor binary – nothing there to keep it alive, nor a shell to execute the script. In his example, Carlos Sanchez (who maintains the Kubernetes plugin) uses the kaniko:debug image, which adds a busybox shell, but that requires other hoops to be jumped through because the shell is not in the normal location. Instead, I built a kaniko image based on alpine.
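
By way of illustration, such an image can be as simple as copying the kaniko files onto an alpine base so that there is a shell at /bin/sh for the plugin to use (the tags and names here are just examples):

cat <<'EOF' > Dockerfile
# multi-stage build: take the executor from the official image...
FROM gcr.io/kaniko-project/executor:latest AS kaniko
# ...and drop it onto alpine, which provides a busybox shell
FROM alpine:3.8
COPY --from=kaniko /kaniko /kaniko
ENV PATH="/kaniko:${PATH}"
EOF
docker build -t example/kaniko-alpine .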

The biggest difference from earlier in the year was, perhaps unsurprisingly, the inclusion of Jenkins X. I hadn’t really left myself enough time to do it justice. Given the usual terrible conference wifi and the GitHub outage earlier in the week, I had recorded a demo showing initial project creation, promotion, and update. I’ve added a voiceover so you can watch it for yourself below (although you probably want to go full-screen unless you have very good eyesight!).
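
The demo followed the standard Jenkins X flow which, from memory, goes roughly as follows (the application name and version are placeholders):

# create a new application from a quickstart: jx generates the project,
# Jenkinsfile, Dockerfile and Helm chart, pushes it to GitHub, and the first
# pipeline run deploys it to the staging environment
jx create quickstart

# watch the pipeline activity for the new application
jx get activity -f myapp -w

# promote a successful build on to the production environment
jx promote myapp --version 0.0.1 --env production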

Introduce poetry to your Kube config with ksonnet

October 15th, 2018

Returning to the 101 ways to create Kubernetes configuration theme, next up is ksonnet from the folks at Heptio. (I have no doubt that there are 101 ways to create Kubernetes configuration but I’m afraid I don’t really intend to cover all of them on this blog!) ksonnet takes a different approach again from Helm and kustomize. In many ways, it is more powerful than either of them, but that power comes at the cost of a fairly steep learning curve.

The name is derived from Jsonnet, a data templating language that came out of Google back in 2014. Jsonnet essentially extends JSON with a scripting syntax that supports the definition of programming constructs such as variables, functions, and objects. The ‘Aha!’ moment for me with ksonnet was realizing that it could be used as a simple template structure in much the same way as Helm. You start with some Kubernetes configuration in JSON format (yq is your friend if you need to convert from YAML to JSON first) and from there you can extract parameters. I say ‘it could’ because you’d typically only take this approach if you were actually converting existing configuration, but realizing this helped me get beyond some of the slightly strange syntax you see in generated files.
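
As a trivial illustration of that idea (my own example rather than anything generated by ksonnet), the image name in a deployment can be pulled out into a variable and the JSON generated from it:

cat <<'EOF' > deployment.jsonnet
// plain JSON apart from the variable defined at the top
local image = "nginx:1.15";
{
  apiVersion: "apps/v1",
  kind: "Deployment",
  metadata: { name: "example" },
  spec: {
    replicas: 2,
    selector: { matchLabels: { app: "example" } },
    template: {
      metadata: { labels: { app: "example" } },
      spec: { containers: [{ name: "example", image: image }] },
    },
  },
}
EOF
jsonnet deployment.jsonnet   # prints the expanded JSON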

As usual, Homebrew is your starting point: brew install ksonnet/tap/ks. ksonnet has an understanding of the different environments to which an application is deployed and, when you issue ks init myapp, it takes the cluster that your current kube config is pointing at as the default environment (although you can override this with --context).

ksonnet then has the concept of ‘prototypes’ which are templates for generating particular types of application component when supplied with suitable parameters. These are provided by ‘packages’ which, in turn, come from a ‘registry’ stored on GitHub. Stealing from the tutorial, we can generate code for a simple deployment and service with the deployed-service prototype giving the image name and service type as parameters e.g.
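
# the guestbook example from the ksonnet tutorial
ks generate deployed-service guestbook-ui \
  --image gcr.io/heptio-images/ks-guestbook-demo:0.1 \
  --type ClusterIP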

At this point, we can use ks show default to return the YAML that would be generated, or ks apply default to actually apply it to the default environment. I highly recommend doing the tutorial first rather than the web-based tour, as it shows you that you can get a long way with ksonnet without actually editing, or even looking at, any of the generated files. For example, you can use ks env add to create another environment and then ks param set to override the values of parameters for a particular environment, as you might with Helm or kustomize.
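
In practice that looks something like the following (the context, environment, and parameter names here are illustrative):

# create a second environment pointing at a different cluster context
ks env add staging --context my-staging-context

# override a parameter for that environment only, then deploy to it
ks param set guestbook-ui replicas 3 --env staging
ks apply staging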

Of course, the real power comes when you drop into the code and make use of ksonnet features like parts and modules to enable greater reuse of configuration in your application. At that point though, you really should take the time to learn jsonnet properly!

British Schools Score Champions

October 13th, 2018

Today we were down at the British Schools Orienteering Association Score Champs, taking place at Moors Valley Country Park. It was a lovely day to be out in the forest (particularly compared with Friday’s weather). This was the first score event that Emma and Duncan have done on their own and the game plan was simply for them to head around the loop of white-standard controls, picking up a few others on their way. They both executed on this and, despite (or perhaps because of) being back well inside the 45-minute time limit, won their respective courses. (Duncan ran up as the event starts at Year 5.) Prizes were presented by Gillian Cross, author of the Demon Headmaster series and a member of the organising club. Results and more pictures can be seen on the British Schools Orienteering Association website. We won’t be travelling up to the non-score Champs in Blackburn later this year but nearby BADO are due to host the event in 2019…

kail: kubernetes tail

October 12th, 2018

A short post for today but it relates to a tool that every Kubernetes user should have in their toolbox: kail. Although most users probably know that kubectl logs will, by default, show the logs for all containers in a pod and that it has --tail and -f options, fewer probably know that it has a -l option to select pods based on label. Kail takes tailing Kubernetes logs to a whole new level.

For Homebrew users, it’s available via brew install boz/repo/kail. When executed without any arguments it tails logs for all containers in the cluster, which is probably not what you want unless your cluster is very quiet! There are, however, flags to let you filter not just on pod, container, and label, but also namespace, deployment, replica set, ingress, service, or node. Flags of the same type are ORed together; different flags are ANDed. And that’s pretty much all there is to it, but anyone who finds themselves watching the logs of any moderately complex application will wonder how they lived without it!
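
A few illustrative invocations (the names are all made up):

# everything in one namespace
kail --ns staging

# all pods belonging to a deployment, whichever namespace they are in
kail --deploy myapp

# different flag types combine with AND: pods behind this service that also carry this label
kail --svc myapp --label tier=frontend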

Kustomizing Kubernetes Konfiguration

October 11th, 2018

Finally, I get to write that blog post on kustomize! kustomize is yet another tool attempting to solve the problem of how to make Kubernetes configuration re-usable. Unlike, say, Helm, kustomize allows configuration to be overridden at consumption time without necessarily having allowed for it when the configuration was originally produced. This is great if you are attempting to re-use someone else’s configuration. On the flip-side, you might prefer to use something like Helm if you actually want to limit the points of variability e.g. to ensure standardization across environments or applications.

You know the drill by now: the Go binary CLI can be obtained via brew install kustomize. There is one main command and that is kustomize build. That expects to be pointed at a directory or URL containing a kustomization.yaml file. Running the command outputs the required Kubernetes resources to standard output where they can then be piped to kubectl if desired.
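
For example (the overlay directory here is made up):

# render the customized resources and hand them straight to kubectl
kustomize build overlays/staging | kubectl apply -f -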

The kustomization.yaml can contain the following directives:

  • namespace – to add a namespace to all the output resources
  • namePrefix – to add a prefix to all the resource names
  • commonLabels – to add a set of labels to all resources (and selectors)
  • commonAnnotations – to add a set of annotations to all resources
  • resources – an explicit list of YAML files to be customized
  • configMapGenerator – to construct ConfigMaps on the fly
  • secretGenerator – to construct Secrets via arbitrary commands
  • patches – YAML files containing partial resource definitions to be overlaid on resources with matching names
  • patchesJson6902 – applies a JSON patch that can add or remove values
  • crds – lists YAML files defining CRDs (so that, if their names are updated, resources using them are also updated)
  • vars – used to define variables that reference resources/fields for replacement in places that kustomize doesn’t handle automatically
  • imageTags – updates the tag for images matching a given name

That’s a pretty comprehensive toolbox for manipulating configuration. The only directive I didn’t mention was bases with which you can build a hierarchy of customizations. The prototypical example given is of a base configuration with different customizations for each deployment environment. Note that you can have multiple bases, so aws-east-staging might extend both aws-east and staging.
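
Putting a few of those directives together, a minimal base-plus-overlay layout might look something like this (all of the names are invented for illustration):

mkdir -p base overlays/staging

# the base just lists the raw resources
cat <<'EOF' > base/kustomization.yaml
resources:
- deployment.yaml
- service.yaml
EOF

# the overlay extends the base and customizes it for one environment
cat <<'EOF' > overlays/staging/kustomization.yaml
bases:
- ../../base
namePrefix: staging-
commonLabels:
  env: staging
imageTags:
- name: example/myapp
  newTag: "1.2.3"
EOF

kustomize build overlays/staging   # prints the customized YAML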

One of the refreshing things about kustomize is that it explicitly calls out a set of features that it doesn’t intend to implement. This introduces the only other command that the CLI supports: kustomize edit. Given that one of the stated restrictions is that kustomize does not provide any mechanism for parameterising individual builds, the intent of this command is to allow you to script modifications to your kustomization.yaml prior to calling build.
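
In a CI script, for example, a per-build tweak might look something like this (the prefix is just an example):

cd overlays/staging
# rewrite kustomization.yaml in place before rendering
kustomize edit set nameprefix "build-${BUILD_NUMBER}-"
kustomize build . | kubectl apply -f -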

It’s worth noting that kustomize can be used in combination with Helm. For example, you could run helm template and then use kustomize to make additional modifications that are not supported by the original chart. You can also use them in the reverse order. The Helmfile docs describe how to use Helmfile’s hooks to drive a script that will use kustomize to construct the required YAML, but then wrap it in a shell chart so that you get the benefit of Helm’s releases.
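
A sketch of that first combination (the chart and directory names are invented), with the overlay's kustomization.yaml listing the rendered file among its resources:

# render the chart to plain YAML with Helm 2's client-side templating...
helm template ./mychart --name myrelease > base/rendered.yaml

# ...then layer the additional modifications on top with kustomize
kustomize build overlays/prod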

Birthday Hash

October 8th, 2018

Lest this blog become entirely about technology: it was Duncan’s 9th birthday last week. His party took place the weekend before and, given it was up at Farley Mount and all outdoors, the weather was very kind to us. Emma and I laid out a hash whilst Christine and Duncan waited for ‘the boys’ to arrive. We returned to find a game of capture the flag (a Cub favourite) well underway.

Having laid the hash, I had a good excuse to stay behind and mind the lunch whilst everyone else disappeared off into the woods. Thankfully they all returned again about half an hour later, although I could still hear them for about half that time! In the meantime, I’d started frying the bacon for the sarnies. I was cooking on gas but the plan was to light a fire in the site’s barbecue grill so that they could toast marshmallows for s’mores. Christine had even bought a flint and steel and, with copious quantities of cotton wool, they did eventually get a fire going. This was quite something given the trouble I then had just trying to light the candles on the cake (with a match I hasten to add). Christine had lots of other activities planned but they seemed happy to round things off with another game of capture the flag.

When the day itself came it was fairly uneventful. Emma gets lots of enjoyment from just watching other people open presents and, given she leaves for school before Duncan gets out of bed these days, he was kind enough to wait until they’d both got back from school. Duncan had a week off football so that things weren’t quite as manic as usual but, after a birthday tea (where, as you can see, he only got the leftovers of his cake from the party!) he and Christine then headed out to Cubs as usual.