Archive for the ‘Technology’ Category

Developing in-cluster with ksync

Sunday, September 30th, 2018

Continuing on the theme of technologies used by Jenkins X under the covers, this post is going to take a look at ksync. In Jenkins X, this project is used to implement the ‘DevPods’ capability, although that is a pretty thin veneer over the top of what ksync provides. As the name suggests, the project performs synchronisation: in particular, bi-directional synchronisation between a directory on your laptop and a pod in a Kubernetes cluster. The aim is to allow you to code locally in your favourite editor but run that code in-cluster where you have access to a full Kubernetes environment, including any other services you might be running.

The workflow is simple: a ksync init sets up your local environment and deploys a DaemonSet in the cluster to get access to the file system of containers. Next, a ksync watch runs a process locally to monitor your laptop. The last step is to issue a ksync create to form a mapping between a local directory and a directory in a container (or potentially multiple containers that match the selector). And that’s all there is to it: make a change locally and it’s reflected in the container; see files written by the container appear on your local disk.
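
For reference, the basic flow looks something like this (the selector and paths are purely illustrative):

$ ksync init
$ ksync watch    # leave this running in a separate terminal
# map the current directory to /code in containers labelled app=myapp
$ ksync create --selector=app=myapp $(pwd) /code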

ksync is, in turn, using syncthing to perform the actual file synchronisation. In addition, it provides the option to restart the application container following a sync (for example, if the application process is not dynamically reloading modified files).

In practice, it works well and certainly supports a much more iterative development experience than you would get if you relied upon a Docker build from something like draft up, even assuming you had optimised your Docker build so it just needed to copy in the application files. As when running an application locally, all of this works best for interpreted languages like PHP, Python or Node. For a language like Java, you need to get the compiled application (or classes) into the synced directory, and that is unlikely to give you the same experience that you can get using an IDE capable of hot-reloading. That’s something that Microclimate looked to address…


draft up

Friday, September 21st, 2018

For my next few posts, I thought I’d pick on some of the technologies that Jenkins X uses under the covers. The first of these is Draft, originally from Deis but now at Microsoft following the acquisition. Draft’s aim is to streamline the process of developing code that runs on Kubernetes. It’s evolved a bit since it was originally released: having started with a client-server architecture, it is now entirely client-based. There are many good reasons for this, although one of the things that differentiated Draft originally was that it didn’t need anything on the developer’s machine other than Draft itself: not even Docker.

The part of Draft that Jenkins X uses is the ability to add a Dockerfile and Helm chart to an existing project. The combination of Dockerfile and Helm chart is stored in what Draft calls a ‘pack’. On running a draft create, Draft does some nifty analysis to detect the language being used in the project in order to select the appropriate pack to use. As you and I know though, language alone is not going to tell me whether I’m, say, running an executable JAR or providing a WAR file to run on an app server. Fortunately, there’s a --pack option so I can tell Draft which pack to use. The pack mechanism is nicely extensible, with the ability to specify new repositories (simply a Git repo containing a packs folder). The packs used by Jenkins X (which include one for Liberty, even if it isn’t very good) can be found here. Draft is also clever enough to know that, if I already have a Dockerfile or chart, I probably don’t want the one from the pack.
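
By way of illustration, selecting a pack explicitly looks something like the following (the repository URL and pack name are from memory, so treat them as indicative):

# add an extra pack repository, e.g. the Jenkins X packs
$ draft pack-repo add https://github.com/jenkins-x/draft-packs
# scaffold using a specific pack rather than relying on language detection
$ draft create --pack=java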

Once I have a Dockerfile and chart, the next step is to deploy my application using draft up. Draft expects to find the Kubernetes context and Helm already set up. It’ll build the Docker image and, if a registry is configured, push the image there. The latter isn’t compulsory though, so if I’m using Docker Desktop (the new name for Docker for Mac/Windows) or have my Docker client pointing at my Minikube Docker daemon, then I can just use the image out of the cache. It will then use Helm to deploy the application, passing through overrides for image.repository and image.tag to reference the image that’s just been built (using a unique tag). It will even set up an imagePullSecret if necessary. You can use draft logs to see the output from the build and deploy.
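
A minimal loop then, assuming you’ve told Draft where to push images (the registry value here is just an example), looks like this:

$ draft config set registry docker.io/myuser
$ draft up      # build the image, push it, and deploy the chart
$ draft logs    # watch the output from the build and deploy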

Originally, Draft came with a ‘watch’ mode where it would attempt to detect file updates and automatically rebuild. Thankfully that now seems to have been dropped as, with a completely unoptimised build cycle, it really wasn’t practical. The Java pack is particularly bad as the provided Dockerfile doesn’t even attempt to cache the Maven dependencies. Now you simply run draft up again to trigger a rebuild (which you could hook up to your editor’s save option if you really wished).

The last part of the Draft developer experience is draft connect, which pipes the logs from any deployed containers to your terminal, along with setting up port forwarding. Sensibly, it allows you to configure the local ports that you want to forward to and this, along with other configuration, can be stored in a draft.toml file with your application. (The authors have to be congratulated for breaking with the current trend and using TOML rather than YAML!)
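
As a sketch (the key names are from memory, so treat them as indicative), the port forwarding can be captured in draft.toml along these lines:

[environments]
  [environments.development]
    name = "myapp"
    namespace = "default"
    # forward local port 8080 to port 8080 in the container
    override-ports = ["8080:8080"]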

There are a few extra niceties in that you can define additional plugins (arbitrary commands that share the Draft metadata via environment variables) and you can define tasks for a project that execute pre-up, post-deploy, and on cleanup. If, like me, you’re left wondering where these are documented, check out the Draft Enhancement Proposals where they were introduced.

All-in-all, there is nothing here that couldn’t be achieved by scripting together a few standard commands but, just because it’s simple, doesn’t mean that it isn’t useful. It’s one of many projects that are attempting to reduce developer friction when deploying to Kubernetes, and you can expect a few more posts covering others…

Jenkins X

Wednesday, September 19th, 2018

Having started to get some rhythm back into the publishing of my personal blog posts, I thought it was about time that I started posting some technical content again too. I’m having to do lots of new learning at the moment and, if nothing else, writing about it makes sure that I’ve understood it and helps me remember at least some of what I’ve learnt. As before, these posts in no way indicate the position of my employer nor, in general, should you read into them anything about technical direction. On the whole, they are just about topics that I’ve found sufficiently interesting to write a little about. There is, I have found, no knowing what will be of interest to other people (my all-time top post relates to Remote Desktop!). From an entirely selfish perspective, I don’t care if anyone reads what I write as it’s largely the writing that gives me value!

Having said all the above, the subject of this post is Jenkins X which is very much in the domain of my new employer! When it was announced back in March, I have to admit that I was somewhat sceptical. It was clearly aiming at much the same space that we were with the DevOps part of Microclimate. My view wasn’t helped by the fact that I couldn’t actually get it to run. It did (and still does) run best out on public cloud but I used up my free quotas on AWS and GCP a long time ago. I tried to run it on minikube and failed. It was also developed by the team behind Fabric8 which, although it showed lots of promise, was never incorporated into any of Red Hat’s commercial offerings. Jenkins X, however, was not set to suffer the same fate and, six months later, my new employer has just announced that it now forms part of the CloudBees Core offering under the name of Kube CD. I’ll save details of that commercial offering for another post and restrict myself to talking about the open source Jenkins X project here.

So what exactly is Jenkins X? It enables Continuous Integration and Continuous Deployment of applications on Kubernetes. It happens to use Jenkins as the engine to perform those actions but, at least at a first pass, that is immaterial. Around that Jenkins is wrapped lots of Kubernetes-native goodness and, most importantly, a CLI by the name of jx. Thankfully, this time around, the minikube experience worked for me just fine and getting up and running was as simple as:
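
# create a minikube cluster and install Jenkins X into it
$ jx create cluster minikube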

I have to say that I’m not a big fan of ‘verb followed by noun’ when it comes to CLI arguments as, although perhaps more readable, it makes end-user discovery harder (jx create tells me about a whole long list of largely unrelated things), but thankfully just typing jx gives a reasonable overview of the main options. Beware though that the CLI is heavily overloaded: it’s used not only for initialisation, but also for subsequent actions performed by the developer, and for those performed by the pipeline.

Perhaps the quickest way to demonstrate the capabilities is to then use a quick start:
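
# choose a technology stack and scaffold a new application
$ jx create quickstart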

This allows you to select a technology stack (everything from Android to Vert.x via Rust, Rails and React!) which lays down a skeleton application on disk. It then uses Draft to add a Dockerfile and Helm chart(s). It doesn’t stop there though. It will then help you create a repo on GitHub, check your code in, set up a multi-branch pipeline on the Jenkins instance it provisioned, and set up the webhook to trigger Jenkins on subsequent updates. (Webhooks don’t tend to work too well unless your minikube is internet-facing but, given a bit more time, polling does the job eventually.) The default pipeline (defined by a Jenkinsfile in your application repository) uses Skaffold to build the application Docker image and push it to a registry. The Helm chart is published to the provided instance of ChartMuseum.

Jenkins X follows the GitOps model promulgated by Alexis Richardson and the team at Weaveworks. By default, it sets up two GitHub repositories that map to staging and production namespaces in the Kubernetes cluster. Additional environments can easily be defined via, you guessed it, jx create environment. These repositories make good use of ‘umbrella’ Helm charts to deploy specific versions of each of the application charts. By default, the master branch is automatically deployed to the staging environment but promotion to production is performed manually, for example:
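
# the application name and version here are illustrative
$ jx promote myapp --version 1.0.1 --env production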

There is also the concept of a preview environment. Typically created for reviewing a pull request, they can also be created manually via the CLI. These allow a specific version of the application to be accessed in a temporary namespace created just for that purpose. All of the Jenkins X configuration (environments, releases, …) is represented in the Kubernetes way: as Custom Resource Definitions.

There’s plenty more to say about Jenkins X but I’ll save that for another post on another day. Hopefully this has given you enough of a flavour to encourage you to download the CLI and give it a try for yourself.

Scratching at Work

Saturday, May 12th, 2018

Not satisfied with a four-day Bank Holiday week, I was back in work today for a Scratch Day organised by the inimitable Dale Lane, supported by an all-star cast of IBMers, past and present. The day got off to an ‘exciting’ start with Duncan and I cycling there along Hursley Road. Emma joined us by car, just as the day got going, hot foot from her swimming lesson.

There was a good turnout from IBM and other local families. On offer was a selection of projects from Code Club and Dale’s own Machine Learning for Kids. Emma and Duncan worked separately and I probably spent most of my time helping Duncan (although both are familiar with Scratch from school and home). Typically, Duncan picked two of the ‘advanced’ options but, having heard Dale talk about them at a lunchtime session, I was more than happy to try out a couple of the ML exercises.

We started with Judge a book which performs image classification on book covers to try and identify genre. I was a bit slow to realise that Duncan was logged in to my Amazon account whilst performing his searches but thankfully we switched to an incognito session before getting to the flesh-covered books under Romance! He’d picked Horror and Fantasy as two of his other genres and it wasn’t surprising that the classifier occasionally got those confused.

I had to help out a fair amount with the Headlines exercise as there was a lot of typing to enter the training set from different newspapers. We didn’t manage to finish before the end of the day but we still had an interesting discussion about the differences between tabloid and broadsheet headlines.

The event closed with an opportunity for the children to show what they had done to the others. Although some were a little reticent, this was a great opportunity for them to build a little confidence and soak up the applause that each invariably got.

All-in-all, we had a great day and my thanks go to all those that gave up a day (and more) to help out. We’ll certainly be checking out a few of the other projects and hope that Scratch Day makes a return to Hursley next year.

Index Developer Conference

Sunday, February 25th, 2018

IBM launched a new conference in San Francisco under the name Index and I was lucky enough to attend. This wasn’t your usual IBM conference focused on brands and products. Although the tracks were aligned with IBM’s strategic areas (Cloud, Blockchain and AI talks were much in evidence, for example), it really was a developer conference with keynotes and sessions from renowned figures across the industry.

You can watch my session covering deploying Jenkins on Kubernetes with Helm and deploying to Kubernetes from Jenkins with Helm below. You can find the deck on SlideShare and the demo material on GitHub. For those who know what I work on, it will be no surprise that this is based on our discoveries when developing Microservice Builder. I highly recommend you also check out some of the other sessions on the conference playlist and watch out for Index 2019!

https://youtu.be/xzbMHj1ly9c

The timing of the conference meant I had Friday to be a tourist with some colleagues. We headed over to SF MoMA and then made the most of the sunshine with a stroll along the waterfront to see the sea lions, before having lunch overlooking the bay.

Half Term Action

Tuesday, February 20th, 2018

Although we had no particular plans, I had the whole of the February half term off work. We went over to Wales for the first few days. I had a lovely long run in the Forest of Dean on Sunday whilst the others went around the sculpture trail. Christine drew the short straw as she got to run back to Monmouth just as the Arctic conditions arrived.

The next day we had to shovel the snow off the driveway before heading over to Llangorse Activity Centre. Christine wanted to cement the skills she’d learnt on her rope handling course whilst her Dad was around. Sue and I went for a short walk up a snowy hill!

Unfortunately, the weather deteriorated again as we headed back to Southampton. Not surprisingly, therefore, we weren’t the only ones to have the idea of going to the Winchester Science Centre and it was Thursday before I could actually book a ticket. By this time we had blue skies but it made a nice change to actually be able to sit outside and eat our lunch. The children enjoyed the special ‘Secret World of Gases’ show even if only for the loud bangs. I was less sure about the ‘We Are Aliens!’ film in the planetarium but you could always just lie back and close your eyes… The same was true of our rather belated trip to see Paddington 2 the following day!

Christine took up the reins again at the weekend with a trip to Mottisfont. I only made it as far as the car park, running back home instead.

Kubernetes arrives in Docker for Mac

Monday, January 8th, 2018

My focus for the last 18 months having been on deployment to Kubernetes, I was excited to hear the news back at DockerCon that Docker Inc were recognising the dominance of Kubernetes. This included adding support to Docker Enterprise Edition (alongside Swarm) and to Docker for Mac/Windows. The latter has now hit beta in the edge channel of Docker for Mac and the following are my first impressions.

Having not had any particular need for the latest and greatest Docker for some time, my first step was to switch from the stable channel back to edge. That’s a pretty painless process. You do lose any of your current containers/images, but you’ve got the ones you care about stored away in a registry somewhere, haven’t you?! Then open up the preferences, switch to the shiny new Kubernetes tab, check the box to Enable Kubernetes and hit Apply followed by Install. As promised, it took a couple of minutes for the cluster to be created.

The UI leaves you a bit in the dark at this point but thankfully the email that arrived touting the new capability gave a pointer as to where to go next: the install creates a kubectl context called docker-for-desktop. With this information, I could access my new cluster from the command line:
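
# switch to the new context and check the node is ready
$ kubectl config use-context docker-for-desktop
$ kubectl get nodes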

Now to take the cluster for a quick spin. Let’s deploy Open Liberty via the Helm chart:
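
# the chart repository URL and chart name here are from memory, so treat them as indicative
$ helm repo add ibm-charts https://raw.githubusercontent.com/IBM/charts/master/repo/stable/
$ helm install --name liberty ibm-charts/ibm-open-liberty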

And, due to the magic of Docker for Mac networking, after a short wait we are treated to the exposed NodePort running on localhost:

Open Liberty on Kubernetes on Docker for Mac
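
For the record, you can find the assigned port by listing the services (the service name will depend on the Helm release name):

$ kubectl get svc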

Undoubtedly there will be issues but, at least at first glance, this support would seem to go a long way to answering those who see minikube as an inhibitor to making Kubernetes a part of the developer’s workflow.

Multi-Arch Docker Images

Sunday, October 8th, 2017

Things have been a little hectic recently and a whole month has slipped past without a blog post. The following is old news now but it’s still something that I wanted to call out. As of mid-September, all the official images on Docker Hub have been multi-arch enabled and many of them, including the websphere-liberty image, are now available for multiple platforms.

So what does that mean practically speaking? It means that, whether you are on x, p or z hardware, you can now use the same docker run websphere-liberty command and have it pull the appropriate image for your architecture. That may seem like a trivial thing: what was so hard about docker run ppc64le/websphere-liberty after all? What it means though is that I can also use the same Dockerfiles, Compose files and Kubernetes configuration, regardless of what platform I’m on.

If you consider WebSphere Liberty, for example, we don’t have any architecture-specific code but we are obviously dependent upon Java, which both contains native code and depends on platform-specific libraries. That meant that, to build the ppc64le/websphere-liberty image, we had to change the Dockerfile to build from ppc64le/ibmjava. That’s no longer the case.

To take another example, with my Microservice Builder hat on we are building Helm charts that we want to run on multiple platforms. Previously we’d had to rely on multiple charts or overrides to select the correct image for the platform. That’s also no longer the case.

Now, just to be 100% clear, this is not about having a single image that can run on multiple architectures: that would require a hypervisor or emulator. The magic here is purely that websphere-liberty is no longer an image: it is a manifest list that points to the images for each architecture and, when the image is pulled, that indirection is resolved by the client to select the image for the current platform.
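
You can see that indirection for yourself with the Docker CLI’s manifest command (experimental at the time of writing, so you may need to enable experimental features first):

$ docker manifest inspect websphere-liberty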

If you want to know more of the technical detail and the journey to get this far, I suggest reading the blog posts by my colleagues Phil Estes and Utz Bacher. I also need to call out Tianon Gravi whose tireless work on the official images meant that enabling this support for WebSphere was entirely painless.