Archive for the ‘Kubernetes’ Category

Continuous Development with Skaffold

Thursday, October 4th, 2018

Next on the list of projects utilised by Jenkins X (and this is a theme that could run and run) is Google’s Skaffold. There is an intersection between the capabilities of Skaffold and Draft. Skaffold does not provide anything like packs (this is where Jenkins X uses Draft) so you need to provide your own Dockerfile and Kubernetes configuration (raw YAML, kustomize templates, and Helm charts are supported). Both projects, however, aim to simplify the process of building an image and deploying it to a Kubernetes cluster to speed iterative development of applications that have hard dependencies on a Kubernetes environment, or the services running therein.

In the Skaffold case, a skaffold run will do a one-time build and deploy, while a skaffold dev will continuously monitor the filesystem to determine when to rebuild and update. As previously discussed, the value of being able to do this, versus needing a smart incremental deployment from an IDE, is very much dependent on how quick that rebuild process is for your application language/runtime of choice. As with Draft, it lets you skip pushing to a registry when you're working with a local cluster.
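
For orientation, a minimal skaffold.yaml for the raw-YAML deployment route looks something like this (the apiVersion and the names here are illustrative, and the schema has changed across releases, so check it against your Skaffold version):

    apiVersion: skaffold/v1beta1      # schema version varies by Skaffold release
    kind: Config
    build:
      artifacts:
      - image: example.com/demo/hello # built from the Dockerfile alongside this file
    deploy:
      kubectl:
        manifests:
        - k8s/*.yaml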

So what does Skaffold offer that Draft doesn’t? Principally, that it is not designed to be used solely at development time. The idea is that the same skaffold run may also be used as part of your continuous deployment pipeline. If you’re a GCP user, this extends to capabilities like using Google Cloud Builder or Kaniko rather than a simple Docker build and, of course, interaction with a registry.
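
As a hedged sketch of the Google Cloud Builder case (the project ID is a placeholder, and this part of the schema has moved around between releases), switching builders is just a matter of swapping the build section:

    build:
      googleCloudBuild:
        projectId: my-gcp-project     # builds run in this GCP project instead of locally
      artifacts:
      - image: gcr.io/my-gcp-project/demo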

As this annotated skaffold.yaml shows, it has a few other neat tricks. You have lots of flexibility over the tagging scheme used for images: SHA, git commit ID, timestamp, or a Golang template. For a Docker build, you can specify build arguments and cache images. You can even configure a bazel build instead of a Docker build.
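
By way of a sketch (the build argument and image names are mine, not taken from that annotated example), the tagging scheme and Docker options all live under build:

    build:
      tagPolicy:
        gitCommit: {}       # alternatives: sha256: {}, dateTime: {}, envTemplate: {...}
      artifacts:
      - image: example.com/demo/hello
        docker:
          buildArgs:
            HTTP_PROXY: "http://proxy.example.com:3128"   # illustrative build argument
          cacheFrom:
          - example.com/demo/hello:latest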

One thing to watch is that you adhere to the convention used for substituting the names of the built images. When using Helm, for example, the default behaviour is to pass the combination of image repository/name and tag as a single value. If your chart is using the default Helm convention of separate .repository and .tag values then you need to specify a different imageStrategy. If your chart expects a three-way split between repository, name, and tag, then you're on your own!
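
A sketch of the Helm case, based on the schema as I remember it (the release name and chart path are illustrative): the values entry names the chart value to populate, and imageStrategy selects how it is split, the default, fqn, passing a single repository:tag string:

    deploy:
      helm:
        releases:
        - name: demo
          chartPath: charts/demo
          values:
            image: example.com/demo/hello
          imageStrategy:
            helm: {}        # sets image.repository and image.tag separately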

Developing in-cluster with ksync

Sunday, September 30th, 2018

Continuing on the theme of technologies used by Jenkins X under the covers, this post is going to take a look at ksync. In Jenkins X, this project is used to implement the ‘DevPods‘ capability. It’s a pretty thin veneer over the top of what is provided by ksync though. As the name suggests, the project performs synchronisation, in particular, bi-directional synchronisation between a directory on your laptop and a pod in a Kubernetes cluster. The aim is to allow you to code locally in your favourite editor but run that code in-cluster where you have access to a full Kubernetes environment, including any other services you might be running.

The workflow is simple: a ksync init sets up your local environment and deploys a daemon set in the cluster to get access to the file system of containers. Next, a ksync watch runs a process locally to monitor your laptop. The last step is to issue a ksync create to form a mapping between a local directory and a directory in a container (or potentially multiple containers that match the selector). And that's all there is to it: make a change locally and it's reflected in the container; see files written by the container appear on your local disk.
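
The whole loop therefore looks something like this (the selector and paths are illustrative):

    # one-off: prepare local config and install the cluster-side daemon set
    ksync init

    # leave this running (e.g. in another terminal) to watch the local filesystem
    ksync watch

    # map the current directory to /code in containers matching the selector
    ksync create --selector=app=my-app $(pwd) /code

    # check what's being synced and its status
    ksync get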

ksync is, in turn, using syncthing to perform the actual file synchronisation. In addition, it provides the option to restart the application container following a sync (for example, if the application process is not dynamically reloading modified files).

In practice, it works well and certainly supports a much more iterative development experience than you would get if you relied upon a Docker build from something like draft up, even assuming you had optimised your Docker build so it just needed to copy in the application files. As when running an application locally, all of this works best for interpreted languages like PHP, Python or Node. For a language like Java, you need to get the compiled application (or classes) into the synced directory and that is unlikely to give you the same experience that you can get using an IDE capable of hot-reloading. That's something that Microclimate looked to address…


draft up

Friday, September 21st, 2018

For my next few posts, I thought I’d pick on some of the technologies that Jenkins X uses under the covers. The first of these is Draft, originally from Deis but it went with them to Microsoft. Draft’s aim is to streamline the process of developing code that runs on Kubernetes. It’s evolved a bit since it was originally released: having started with a client-server architecture, it is now entirely client based. There are many good reasons for this although one of the things that differentiated Draft originally was that it didn’t need anything on the developer’s machine other than Draft itself: not even Docker.

The part of Draft that Jenkins X uses is the ability to add a Dockerfile and Helm chart to an existing project. The combination of Dockerfile and Helm chart is stored in what Draft calls a ‘pack’. On running a draft create, Draft does some nifty analysis to detect the language being used in the project in order to select the appropriate pack to use. As you and I know though, language alone is not going to tell me whether I’m, say, running an executable JAR or providing a WAR file to run on an app server. Fortunately, there’s a --pack option so I can tell Draft which pack to use. The pack mechanism is nicely extensible with the ability to specify new repositories (simply a Git repo containing a packs folder). The packs used by Jenkins X (which include one for Liberty even if it isn’t very good) can be found here. Draft is also clever enough to know that, if I already have a Dockerfile or chart, I probably don’t want the one from the pack instead.
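
For example (the pack name here is illustrative of those in the default repository):

    # scaffold a Dockerfile and chart, letting Draft detect the language
    draft create

    # or override the detection and name a pack explicitly
    draft create --pack=java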

Once I have a Dockerfile and chart, the next step is to deploy my application using draft up. Draft expects to find the Kubernetes context and Helm already set up. It'll build the Docker image and, if a registry is configured, push the image there. The latter isn't compulsory though, so if I'm using Docker Desktop (the new name for Docker for Mac/Windows) or have my Docker client pointing at my Minikube Docker daemon then I can just use the image out of the cache. It will then use Helm to deploy the application, passing through overrides for image.repository and image.tag to reference the image that's just been built (using a unique tag). It will even set up an imagePullSecret if necessary. You can use draft logs to see the output from the build and deploy.
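
To make that concrete, the deploy step is roughly equivalent to the following hand-run Helm (v2) command; the release name, chart path, repository, and tag are placeholders for the values Draft generates:

    helm upgrade --install demo charts/demo \
      --set image.repository=registry.example.com/demo \
      --set image.tag=a1b2c3d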

Originally, Draft came with a ‘watch’ mode where it would attempt to detect file updates and automatically rebuild. Thankfully that now seems to have been dropped as, with a completely unoptimised build cycle, it really wasn’t practical. The Java pack is particularly bad as the provided Dockerfile doesn’t even attempt to cache the Maven dependencies. Now you simply run draft up again to trigger a rebuild (which you could hook up to your editor’s save option if you really wished).

The last part of the Draft developer experience is draft connect, which pipes the logs from any deployed containers to your terminal, along with setting up port forwarding. Sensibly, it allows you to configure the local ports that you want to forward to, and this, along with other configuration, can be stored in a draft.toml file with your application. (The authors have to be congratulated for breaking with the current trend and using TOML rather than YAML!)
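
From memory, a draft.toml is along the following lines; treat the key names, particularly override-ports, as illustrative rather than definitive:

    [environments]
      [environments.development]
        name = "demo"                  # application/release name
        namespace = "default"
        override-ports = ["8080:8080"] # local:remote ports for draft connect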

There are a few extra niceties in that you can define additional plugins (arbitrary commands that share the Draft meta-data via environment variables) and you can define tasks for a project that execute pre-up, post-deploy, and on cleanup. If, like me, you're left wondering where these are documented, check out the Draft Enhancement Proposals where they were introduced.

All-in-all, there is nothing here that couldn't be achieved by scripting together a few standard commands but just because it's simple, doesn't mean that it isn't useful. It's one of many projects that are attempting to reduce developer friction when deploying to Kubernetes, and you can expect a few more posts covering others…

Jenkins X

Wednesday, September 19th, 2018