Archive for the ‘Kubernetes’ Category

Continuous Development with Skaffold

Thursday, October 4th, 2018

Next on the list of projects utilised by Jenkins X (and this is a theme that could run and run) is Google’s Skaffold. There is an intersection between the capabilities of Skaffold and Draft. Skaffold does not provide anything like packs (this is where Jenkins X uses Draft) so you need to provide your own Dockerfile and Kubernetes configuration (raw YAML, kustomize templates, and Helm charts are supported). Both projects, however, aim to simplify the process of building an image and deploying it to a Kubernetes cluster to speed iterative development of applications that have hard dependencies on a Kubernetes environment, or the services running therein.

In the Skaffold case, a skaffold run will do a one-time build and deploy and a skaffold dev will continuously monitor the filesystem to determine when to rebuild and update. As previously discussed, the value of being able to do this versus needing a smart incremental deployment from an IDE is very much dependent on how quick that rebuild process is for your application language/runtime of choice. As with Draft, it allows you to skip pushing to a registry when you’re working with a local cluster.

So what does Skaffold offer that Draft doesn’t? Principally, that it is not designed to be used solely at development time. The idea is that the same skaffold run may also be used as part of your continuous deployment pipeline. If you’re a GCP user, this extends to capabilities like using Google Cloud Builder or Kaniko rather than a simple Docker build and, of course, interaction with a registry.

As this annotated skaffold.yaml shows, it has a few other neat tricks. You have lots of flexibility over the tagging scheme used for images: SHA, git commit ID, timestamp, or a Golang template. For a Docker build, you can specify build arguments and cache images. You can even configure a Bazel build instead of a Docker build.
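By way of a sketch (the schema has evolved between Skaffold releases, so field names and their exact placement may differ), a skaffold.yaml that tags images with the git commit ID and passes a Docker build argument might look something like this:

    apiVersion: skaffold/v1alpha4        # schema version is an assumption
    kind: Config
    build:
      tagPolicy:
        gitCommit: {}                    # alternatives include sha256, envTemplate and dateTime
      artifacts:
      - imageName: example.com/my-app    # hypothetical image name
        docker:
          buildArgs:
            BUILD_PROFILE: dev           # example build argument
    deploy:
      kubectl:
        manifests:
        - k8s/*.yaml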

One thing to watch is that you adhere to the convention used for substituting the names of the built images. When using Helm, for example, the default behaviour is to pass the combination of image repository/name and tag as a single value. If your chart is using the default Helm convention of separate .repository and .tag values then you need to specify a different imageStrategy. If your chart expects a three-way split between repository, name, and tag, then you’re on your own!
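For a Helm deployment, the relevant section might look something like the following sketch (release, chart, and image names are all made up for the example); the helm imageStrategy tells Skaffold to pass separate .repository and .tag values rather than the default single fully-qualified value:

    deploy:
      helm:
        releases:
        - name: my-app                   # hypothetical release name
          chartPath: charts/my-app
          values:
            image: example.com/my-app    # chart value that receives the built image
          imageStrategy:
            helm: {}                     # split into image.repository and image.tag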

Developing in-cluster with ksync

Sunday, September 30th, 2018

Continuing on the theme of technologies used by Jenkins X under the covers, this post is going to take a look at ksync. In Jenkins X, this project is used to implement the ‘DevPods‘ capability. It’s a pretty thin veneer over the top of what is provided by ksync though. As the name suggests, the project performs synchronisation, in particular, bi-directional synchronisation between a directory on your laptop and a pod in a Kubernetes cluster. The aim is to allow you to code locally in your favourite editor but run that code in-cluster where you have access to a full Kubernetes environment, including any other services you might be running.

The workflow is simple: a ksync init sets up your local environment and deploys a daemon set in the cluster to get access to the file system of containers. Next, a ksync watch runs a process locally to monitor your laptop. The last step is to issue a ksync create to form a mapping between a local directory and a directory in a container (or potentially multiple containers that match the selector). And that’s all there is to it: make a change locally and it’s reflected in the container; see files written by the container appear on your local disk.
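By way of illustration (the selector and paths here are made up), the whole workflow amounts to something like:

    ksync init                                  # one-off setup; deploys the cluster-side daemon set
    ksync watch                                 # leave running (e.g. in another terminal) to perform the sync
    ksync create --selector=app=my-app \
      $(pwd) /usr/src/app                       # map this directory into matching containers
    ksync get                                   # check the status of the mapping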

ksync is, in turn, using syncthing to perform the actual file synchronisation. In addition, it provides the option to restart the application container following a sync (for example, if the application process is not dynamically reloading modified files).

In practice, it works well and certainly supports a much more iterative development experience than you would get if you relied upon a Docker build from something like draft up, even assuming you had optimised your Docker build so it just needed to copy in the application files. As when running an application locally, all of this works best for interpreted languages like PHP, Python or Node. For a language like Java, you need to get the compiled application (or classes) into the synced directory and that is unlikely to give you the same experience that you can get using an IDE capable of hot-reloading. That’s something that Microclimate looked to address…

 

draft up

Friday, September 21st, 2018

For my next few posts, I thought I’d pick on some of the technologies that Jenkins X uses under the covers. The first of these is Draft, originally from Deis but it went with them to Microsoft. Draft’s aim is to streamline the process of developing code that runs on Kubernetes. It’s evolved a bit since it was originally released: having started with a client-server architecture, it is now entirely client based. There are many good reasons for this although one of the things that differentiated Draft originally was that it didn’t need anything on the developer’s machine other than Draft itself: not even Docker.

The part of Draft that Jenkins X uses is the ability to add a Dockerfile and Helm chart to an existing project. The combination of Dockerfile and Helm chart is stored in what Draft calls a ‘pack’. On running a draft create, Draft does some nifty analysis to detect the language being used in the project in order to select the appropriate pack to use. As you and I know though, language alone is not going to tell me whether I’m, say, running an executable JAR or providing a WAR file to run on an app server. Fortunately, there’s a --pack option so I can tell Draft which pack to use. The pack mechanism is nicely extensible with the ability to specify new repositories (simply a Git repo containing a packs folder). The packs used by Jenkins X (which include one for Liberty even if it isn’t very good) can be found here. Draft is also clever enough to know that, if I already have a Dockerfile or chart, I probably don’t want it replaced by the one from the pack.
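For example (the pack name here is illustrative):

    draft create                   # detect the language and apply the matching pack
    draft create --pack java       # or explicitly specify the pack to use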

Once I have a Dockerfile and chart, the next step is to deploy my application using draft up. Draft’s expecting to find the Kubernetes context and Helm already set up. It’ll build the Docker image and, if a registry is configured, push the image there. The latter isn’t compulsory though, so if I’m using Docker Desktop (the new name for Docker for Mac/Windows) or have my Docker client pointing at my Minikube Docker daemon then I can just use the image out of the cache. It will then use Helm to deploy the application, passing through overrides for image.repository and image.tag to reference the image that’s just been built (using a unique tag). It will even set up an imagePullSecret if necessary. You can use draft logs to see the output from the build and deploy.
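In practice that boils down to little more than:

    draft up       # build the image, optionally push it, and deploy via Helm
    draft logs     # view the output from the build and deploy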

Originally, Draft came with a ‘watch’ mode where it would attempt to detect file updates and automatically rebuild. Thankfully that now seems to have been dropped as, with a completely unoptimised build cycle, it really wasn’t practical. The Java pack is particularly bad as the provided Dockerfile doesn’t even attempt to cache the Maven dependencies. Now you simply run draft up again to trigger a rebuild (which you could hook up to your editor’s save option if you really wished).

The last part of the Draft developer experience is draft connect which pipes the logs from any deployed containers to your terminal, along with setting up port forwarding. Sensibly, it allows you to configure the local ports that you want to forward to and this, along with other configuration, can be stored in a draft.toml file with your application. (The authors have to be congratulated for breaking with the current trend and using TOML rather than YAML!)

There are a few extra niceties in that you can define additional plugins (arbitrary commands that share the Draft meta-data via environment variables) and you can define tasks for a project that execute pre-up, post-deploy, and on cleanup. If, like me, you are left wondering where these are documented, check out the Draft Enhancement Proposals where they were introduced.

All-in-all, there is nothing here that couldn’t be achieved by scripting together a few standard commands but, just because it’s simple, doesn’t mean that it isn’t useful. It’s one of many projects that are attempting to reduce developer friction when deploying to Kubernetes, and you can expect a few more posts covering others…

Jenkins X

Wednesday, September 19th, 2018

Having started to get some rhythm back into the publishing of my personal blog posts, I thought it was about time that I started posting some technical content again too. I’m having to do lots of new learning at the moment and, if nothing else, writing about it makes sure that I’ve understood it and helps me remember at least some of what I’ve learnt. As before, these posts in no way indicate the position of my employer nor, in general, should you read into them anything about technical direction. On the whole, they are just about topics that I’ve found sufficiently interesting to write a little about. There is, I have found, no knowing what will be of interest to other people (my all-time top post relates to Remote Desktop!). From an entirely selfish perspective, I don’t care if anyone reads what I write as it’s largely the writing that gives me value!

Having said all the above, the subject of this post is Jenkins X which is very much in the domain of my new employer! When it was announced back in March, I have to admit that I was somewhat sceptical. It was clearly aiming at much the same space that we were with the dev-ops part of Microclimate. My view wasn’t helped by the fact that I couldn’t actually get it to run. It did (and still does) run best out on public cloud but I used up my free quotas on AWS and GCP a long time ago. I tried to run it on minikube and failed. It was also developed by the team behind Fabric8 which, although it showed lots of promise, was never incorporated into any of Red Hat’s commercial offerings. The same was not set to be true of Jenkins X and, six months later, my new employer has just announced that it now forms part of the CloudBees Core offering under the name of Kube CD. I’ll save details of that commercial offering for another post and restrict myself to talking about the open source Jenkins X project here.

So what exactly is Jenkins X? It enables Continuous Integration and Continuous Deployment of applications on Kubernetes. It happens to use Jenkins as the engine to perform those actions but, at least at a first pass, that is immaterial. Around that Jenkins is wrapped lots of Kubernetes-native goodness and, most importantly, a CLI by the name of jx. Thankfully this time around the minikube experience worked for me just fine and getting up and running was as simple as:
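(Something along these lines; the exact command and flags may have changed since.)

    jx create cluster minikube     # create a minikube cluster and install Jenkins X into it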

I have to say that I’m not a big fan of ‘verb followed by noun’ when it comes to CLI arguments: although perhaps more readable, it makes end-user discovery harder (jx create tells me about a whole long list of largely unrelated things). Thankfully, just typing jx gives a reasonable overview of the main options. Beware though that the CLI is heavily overloaded: it’s used not only for initialisation but also for subsequent actions performed by the developer, and for those performed by the pipeline.

Perhaps the quickest way to demonstrate the capabilities is to then use a quick start:
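(In its simplest form, with any flags omitted.)

    jx create quickstart           # choose a language/framework and scaffold a new project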

This allows you to select a technology stack (everything from Android to Vert.x via Rust, Rails and React!) which lays down a skeleton application on disk. It then uses Draft to add a Dockerfile and Helm chart(s). It doesn’t stop there though. It will then help you create a repo on GitHub, check your code in, set up a multi-branch pipeline on the Jenkins instance it provisioned, and set up the webhook to trigger Jenkins on subsequent updates. (Webhooks don’t tend to work too well unless your minikube is internet facing but, given a bit more time, polling does the job eventually.) The default pipeline (defined by a Jenkinsfile in your application repository) uses Skaffold to build the application Docker image and push to a registry. The Helm chart is published to the provided instance of ChartMuseum.

Jenkins X follows the GitOps model promulgated by Alexis Richardson and the team at Weaveworks. By default, it sets up two GitHub repositories that map to staging and production namespaces in the Kubernetes cluster. Additional environments can easily be defined via, you guessed it, jx create environment. These repositories make good use of ‘umbrella’ Helm charts to deploy specific versions of each of the application charts. By default, the master branch is automatically deployed to the staging environment but promotion to production is performed manually, for example:
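(Application name and version here are illustrative.)

    jx promote my-app --version 0.0.1 --env production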

There is also the concept of a preview environment. Typically created for reviewing a pull request, they can also be created manually via the CLI. These allow a specific version of the application to be accessed in a temporary namespace created just for that purpose. All of the Jenkins X configuration (environments, releases, …) is represented in the Kubernetes way: as Custom Resource Definitions.

There’s plenty more to say about Jenkins X but I’ll save that for another post on another day. Hopefully this has given you enough of a flavour to encourage you to download the CLI and give it a try for yourself.

Index Developer Conference

Sunday, February 25th, 2018

IBM launched a new conference in San Francisco under the name Index and I was lucky enough to attend. This wasn’t your usual IBM conference focused on brands and products. Although the tracks were aligned with IBM’s strategic areas (Cloud, Blockchain and AI talks were much in evidence, for example) it really was a developer conference, with keynotes and sessions from renowned figures across the industry.

You can watch my session covering deploying Jenkins on Kubernetes with Helm and deploying to Kubernetes from Jenkins with Helm below. You can find the deck on SlideShare and the demo material on GitHub. For those who know what I work on, it will be no surprise that this is based on our discoveries when developing Microservice Builder. I highly recommend you also check out some of the other sessions on the conference playlist and watch out for Index 2019!

https://youtu.be/xzbMHj1ly9c

The timing of the conference meant I had Friday to be a tourist with some colleagues. We headed over to SF MoMA and then made the most of the sunshine with a stroll along the waterfront to see the sea lions and then to have lunch overlooking the bay.

Kubernetes arrives in Docker for Mac

Monday, January 8th, 2018

My focus for the last 18 months having been on deployment to Kubernetes, I was excited to hear the news back at DockerCon that Docker Inc were recognising the dominance of Kubernetes. This included adding support to Docker Enterprise Edition (alongside Swarm) and to Docker for Mac/Windows. The latter has now hit beta in the edge channel of Docker for Mac and the following are my first impressions.

Having not had any particular need for the latest and greatest Docker for some time, my first step was to switch from the stable channel back to edge. That’s a pretty painless process. You do lose any of your current containers/images but you’ve got the ones you care about stored away in a registry somewhere haven’t you?! Then open up the preferences, switch to the shiny new Kubernetes tab, check the box to Enable Kubernetes and hit Apply followed by Install. As promised, it took a couple of minutes for the cluster to be created.

The UI leaves you a bit in the dark at this point but thankfully the email that arrived touting the new capability gave you a pointer as to where to go next: the install creates a kubectl context called docker-for-desktop. With this information I could access my new cluster from the command line:
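For example:

    kubectl config use-context docker-for-desktop   # switch to the newly created context
    kubectl get nodes                                # a single node should be reported
    kubectl get pods --all-namespaces                # the control-plane pods in kube-system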

Now to take the cluster for a quick spin. Let’s deploy Open Liberty via the Helm chart:
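(Assuming Helm is initialised in the cluster and that the chart is still published under ibm-charts/ibm-open-liberty; the repository URL and chart name below are from memory.)

    helm init                                        # Helm v2: install Tiller into the cluster
    helm repo add ibm-charts \
      https://raw.githubusercontent.com/IBM/charts/master/repo/stable/
    helm install ibm-charts/ibm-open-liberty \
      --name open-liberty                            # hypothetical release name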

And, due to the magic of Docker for Mac networking, after a short wait we are treated to the exposed NodePort running on localhost:

Open Liberty on Kubernetes on Docker for Mac

Undoubtedly there will be issues but, at least at first glance, this support would seem to go a long way to answering those who see minikube as an inhibitor to making Kubernetes a part of the developer’s workflow.

Optional Kubernetes resources and PodPresets

Thursday, July 27th, 2017

The sample for Microservice Builder is intended to run on top of the Microservice Builder fabric and also to utilize the ELK sample. As such, the Kubernetes configurations for the microservice pods all bind to a set of resources (secrets and config-maps) created by the Helm charts for the fabric and ELK sample. The slightly annoying thing is that the sample would work perfectly well without these (you just wouldn’t get any logging to the ELK stack) except that, as we shall see in a moment, deployment fails if the fabric and ELK sample have not already been deployed. In this post we’ll explore a few possibilities as to how these resources could be made optional.

I’m going to assume a minikube environment here and we’re going to try to deploy just one of the microservices as follows:
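(A straightforward apply of that microservice’s generated configuration; the file name below is hypothetical.)

    kubectl apply -f manifests/vote-deployment.yaml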

If you then perform a kubectl describe for the pod that is created you’ll see that it fails to start as it can’t bind the volume mounts:

Elsewhere in the output though you’ll see a clue to our first plan of attack:

Doesn’t that optional flag look promising?! As of Kubernetes 1.7 (and thanks to my one-time colleague Michael Fraenkel) we can mark our usage of secrets and config-maps as optional. Our revised pod spec would now look as follows:
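(In outline, with illustrative secret and config map names; the key addition is the optional attribute on each reference.)

    apiVersion: v1
    kind: Pod
    metadata:
      name: my-microservice
    spec:
      containers:
      - name: my-microservice
        image: example.com/my-microservice:latest
        envFrom:
        - configMapRef:
            name: logstash-config          # hypothetical config map
            optional: true                 # pod still starts if it's absent
        volumeMounts:
        - name: logstash-cert
          mountPath: /etc/logstash
      volumes:
      - name: logstash-cert
        secret:
          secretName: logstash-secret      # hypothetical secret
          optional: true                   # and likewise if this is absent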

And lo and behold, with that liberal sprinkling of optional attributes we can now successfully deploy the service without either the fabric or ELK sample. Success! But why stop there? All of this is boilerplate that is repeated across all our microservices. Wouldn’t it be better if it simply wasn’t there in the pod spec and we just added it when it was needed? Another new resource type in Kubernetes 1.7 comes to our rescue: the PodPreset. A pod preset allows us to inject just this kind of configuration at deployment time to pods that match a given selector.

We can now slim our deployment down to the bare minimum that we want to have in our basic environment:
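(Image and resource names are again illustrative.)

    apiVersion: apps/v1beta1
    kind: Deployment
    metadata:
      name: my-microservice
    spec:
      replicas: 1
      template:
        metadata:
          labels:
            app: my-microservice
            runtime: liberty               # the label the pod preset will select on
        spec:
          containers:
          - name: my-microservice
            image: example.com/my-microservice:latest
            ports:
            - containerPort: 9080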

Note that we have also added that runtime: liberty label to the pod, which is what we’re going to use to match on. In our advanced environment, we don’t want to be adding the resources to every pod in the environment; in particular, we don’t want to add them to those that aren’t even running Liberty. This slimmed-down deployment works just fine, in the same way that the optional version did.

Now, what do we have to do to get all of that configuration back in an environment where we do have the fabric and ELK sample deployed? Well, we define it in a pod preset as follows:
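(Again with illustrative resource names.)

    apiVersion: settings.k8s.io/v1alpha1
    kind: PodPreset
    metadata:
      name: liberty-logging
    spec:
      selector:
        matchLabels:
          runtime: liberty                 # matches the label on our slimmed-down pods
      envFrom:
      - configMapRef:
          name: logstash-config            # hypothetical config map
      volumeMounts:
      - name: logstash-cert
        mountPath: /etc/logstash
      volumes:
      - name: logstash-cert
        secret:
          secretName: logstash-secret      # hypothetical secret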

Note that the selector is matching on the label that we defined in the pod spec earlier. Now, pod presets are currently applied by something in Kubernetes called admission control and, because they are still alpha, minikube doesn’t enable the admission controller for PodPresets by default. We can enable it as follows:
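(The plugin list below is illustrative; the important part is adding PodPreset to the admission plugins passed to the API server.)

    minikube start --kubernetes-version=v1.7.0 \
      --extra-config=apiserver.Admission.PluginNames=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota,PodPreset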

(Note that, prior to minikube v0.21.0 this property was called apiserver.GenericServerRunOptions.AdmissionControl, a change that cost me half an hour of my life I’ll never get back!)

With the fabric, ELK sample and pod preset deployed, we now find that our pod regains its volume mounts when deployed courtesy of the admission controller:

Pod presets are tailor-made for this sort of scenario where we want to inject secrets and config maps but even they don’t go far enough for something like Istio where we want to inject a whole new container into the pod (the Envoy proxy) at deployment time. Admission controllers in general also have their limitations in that they have to be compiled into the API server and, as we’ve seen, they have to be specified when the API server starts up. If you need something a whole lot more dynamic then take a look at the newly introduced initializers.

One last option for those who aren’t yet on Kubernetes 1.7. We’re in the process of moving our generated microservices to use Helm and in a Helm chart template you can make configuration optional. For example, we might define a logging option in our values.yaml with a default value of disabled, and then we can define constructs along the following lines in our pod spec:
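(A sketch, assuming a logging value that defaults to disabled; the config map name is illustrative.)

        containers:
        - name: {{ .Chart.Name }}
          image: {{ .Values.image.repository }}:{{ .Values.image.tag }}
    {{- if eq .Values.logging "enabled" }}
          envFrom:
          - configMapRef:
              name: logstash-config        # only referenced when logging is enabled
          volumeMounts:
          - name: logstash-cert
            mountPath: /etc/logstash
    {{- end }}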

Then all we’ve got to do when we’re deploying to our environment with the fabric and ELK sample in place is to specify an extra --set logging=enabled on our helm install. Unlike the pod preset, this does mean that the logic is repeated in the Helm chart for every microservice but it certainly wins on the portability stakes.

Microservice Builder GA Update

Wednesday, July 12th, 2017

As I posted here on the Microservice Builder beta, I thought it only fair that I should offer an update now that it is Generally Available. There is already the official announcement, various coverage in the press including ZDNet and ADT, a post from my new General Manager Denis Kennelly, and, indeed, my own post on the official blog, so I thought I’d focus on what has changed from a technical standpoint since the beta.

If I start with the developer CLI, the most significant change here is that you no longer need a Bluemix login. Indeed, if you aren’t logged in, you’ll no longer be prompted for potentially irrelevant information such as the sub-domain on Bluemix where you want the application to run. Note, however, that the CLI is still using back-end services out in the cloud to generate the projects so you’ll still need internet connectivity when performing a bx dev create.

Moving on to the next part of the end-to-end flow, the Jenkins-based CI/CD pipeline: the Helm chart for this has been modified extensively. It is now based on the community chart which, most significantly, means that it is using the Kubernetes plugin for Jenkins. This results in the use of separate containers for each of the build steps (with Maven for the app build, Docker for the image build, and kubectl for the deploy) and those containers are spun up dynamically as part of a Kubernetes pod representing the Jenkins slave when required.

The Jenkinsfile has also been refactored to make extensive use of a Jenkins library. As you’ll see in the sample projects, this means that the generated Jenkinsfile is now very sparse:
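(From memory, something like the following; the image name is illustrative and the library name may have changed since.)

    @Library('MicroserviceBuilder') _
    microserviceBuilderPipeline {
      image = 'microservice-vote'
    }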

I could say much more about the work we’ve done with the pipeline but to do so would be stealing the thunder from one of my colleagues who I know is penning an article on this subject.

Looking at the runtime portion, what we deploy for the Microservice Builder fabric has changed significantly. We had a fair amount of heartache as we considered security concerns in the inter-component communication. This led us to move the ELK stack and configuration for the Liberty logstash feature out into a sample. This capability will return although likely in a slightly different form. The fabric did gain a Zipkin server for collation and display of OpenTracing data. Again, the security concerns hit home here and, for now, the server is not persisting data and the dashboard is only accessible via kubectl port-forward.

Another significant change, and one of the reasons I didn’t post this immediately, was that a week after we GA’d, IBM Spectrum Conductor for Containers morphed into IBM Cloud Private. In the 1.2 release, this is largely a rebranding exercise but there’s certainly a lot more to come in this space. Most immediately for Microservice Builder, it means that you no longer need to add our Helm repository as it will be there in the App Center out of the box. It also meant a lot of search and replace for me in our Knowledge Center!

You may be wondering where we are heading next with Microservice Builder. As always, unfortunately I can’t disclose future product plans. What I can do is highlight existing activity that is happening externally. For example, if you look at the Google Group for the MicroProfile community, you will see activity ramping up there and proposals for a number of new components. Several of the Microservice Builder announcements also refer to the Istio service mesh project on which IBM is collaborating with Google. It’s still early days there but the project is moving fast and you can take a look at some of the exciting features on the roadmap.